Author: Fredrik Gustavsson

  • The democratisation of e-commerce: How technology has become invisible

    The e-commerce landscape has undergone a major shift. What once required teams of developers, significant capital investment, and months of custom coding now has a much faster technical setup. The big players—Shopify, BigCommerce, and similar hosted platforms—have created such robust, well-documented ecosystems that enterprise-level functionality is now accessible to solo entrepreneurs and small businesses at a fraction of historical costs.

    While the technical environment can be configured in a couple of days, launching a successful store still requires weeks of work. The hard part isn’t the technology anymore—it’s creating attractive content for buyers, setting up proper analytics, and configuring the right integrations to run your business effectively.

    The great leveling: Enterprise features for everyone

    Just three to five years ago, features we now take for granted would have cost tens of thousands of euros to implement. Multi-currency support, advanced inventory management, automated tax calculations, abandoned cart recovery, analytics dashboards, and seamless payment processing required custom development or expensive enterprise solutions.

    Today, these capabilities come standard or are available through affordable apps and integrations. A small boutique can now offer the same checkout experience as a major corporation. The technology barrier that once separated small businesses from large enterprises has essentially disappeared.

    The ecosystem effect has been transformative. When platforms like Shopify reached critical mass, they attracted thousands of developers building specialised apps, themes, and integrations. This created a positive cycle: more apps attracted more merchants, which attracted more developers, which created better solutions at lower prices. The result is a marketplace where complex e-commerce functionality is ready to use.

    The shifting battleground, from backend to frontend

    With the technical infrastructure commoditized, the competitive battleground has shifted dramatically. Success no longer hinges on having the most sophisticated backend system or the cleanest code. Instead, it’s about capturing attention and converting visitors in an increasingly crowded digital marketplace.

    Video is becoming the new storefront

    The shift toward video content represents the biggest change in how products are discovered and sold online. TikTok, Instagram Reels, YouTube Shorts, and other platforms have trained consumers to expect engaging content. Static product photos now feel outdated.

    Video allows for storytelling and emotional connection in ways that traditional e-commerce pages cannot. A 15-second video showing a product in use can communicate value better than long product descriptions. The line between entertainment and shopping continues to blur.

    Most importantly, this is all happening on mobile devices. The vast majority of B2C purchases now happen on smartphones, with only a small fraction coming from desktop computers. This mobile-first reality changes everything about how brands need to communicate.

    The paid media challenge

    Organic reach on social platforms has dropped, making paid advertising necessary for visibility. The advertising platforms have grown more advanced, with AI-driven targeting, automated bidding, and cross-platform tracking becoming standard. Small businesses can now run advertising campaigns with precision that was once only available to large agencies.

    However, this has made competition more intense. As barriers to entry have lowered, more players have entered the field, driving up advertising costs and making creative content crucial. The businesses that succeed are those that can create compelling content at scale while speaking directly to their target audience.

    The AI revolution: The next frontier

    Artificial intelligence is poised to reshape e-commerce discovery and shopping experiences in profound ways. We’re already seeing early implementations that hint at what’s coming:

    Personalised shopping assistants

    AI-powered chatbots and shopping assistants are evolving beyond simple FAQ responses to become sophisticated personal shoppers. These systems can understand context, remember preferences, and make nuanced recommendations based on browsing behavior, purchase history, and even external factors like weather or trending topics.

    Visual search is changing product discovery

    The way customers find products is changing significantly. Visual search allows customers to upload photos or take pictures and find similar items instantly. This technology is shifting how businesses think about product discovery – from keyword optimisation to visual optimisation. Instead of relying only on text descriptions, products need to be visually discoverable through AI-powered image recognition.

    Predictive commerce

    AI is enabling businesses to anticipate customer needs before customers realize them. By analyzing patterns in browsing behavior, purchase history, and external data, systems can suggest products at precisely the right moment in the customer journey.

    Dynamic pricing and inventory

    Real-time pricing adjustments based on demand, competitor analysis, and stock levels are becoming mainstream. AI can adjust prices across thousands of products instantly, maximising both conversion rates and profit margins.

    The paradox of technological simplicity

    Here’s the interesting paradox of modern e-commerce: as technology has become more sophisticated, running an online business has become less about technology. The complexity has been moved into platforms, apps, and services that simply work.

    This democratisation means that success increasingly depends on human skills: understanding customer needs, creating compelling content, building authentic relationships, and crafting memorable brand experiences. The technical barriers that once protected established players have largely disappeared.

    The winners will be brands that can meet customers where they are – on mobile devices, on social platforms, in the moments when they’re ready to buy. Success comes from differentiated communication that speaks directly to specific target groups, not generic messaging that tries to appeal to everyone.

    Brand strength matters more than ever because when the technology is the same for everyone, what differentiates you is how well you connect with your audience.

    Entrepreneurs today can focus on what matters most: finding the right product-market fit, understanding their customers, and building meaningful connections. The infrastructure that supports these efforts has become invisible, reliable, and affordable.

    When complexity requires a different approach

    While platforms like Shopify and BigCommerce work well for smaller brands, enterprise e-commerce often needs a different solution. When you’re dealing with multiple countries, multiple currencies, complex product catalogues, integrations with backend systems, POS systems, and B2B sales channels, the one-size-fits-all approach reaches its limits, and forcing the simple route can end in an unmaintainable mess.

    However, the same democratisation concept that benefits small businesses can be applied to enterprise solutions. Instead of building everything from scratch, modern enterprise e-commerce leverages best-in-class components that work together.

    At Nexer, we’ve created hybrid solutions that give you the best of both worlds. By combining specialised components like Storyblok for content management, Norce Commerce for complex e-commerce functionality, and Algolia for search, we can build enterprise-grade solutions faster and more cost-effectively than traditional custom development. With Owlstreet, the integration tool for digital commerce, integrations are configured using pre-built connectors, allowing the project to focus on the value rather than the technology.

    These solutions include state-of-the-art design based on user experience research and standard components that meet accessibility requirements. The difference is that instead of starting from zero, we’re assembling proven components that already handle the complex parts.

    This approach means enterprise brands can benefit from the same trend toward democratisation – getting sophisticated functionality without the traditional time and cost barriers.

    Looking forward: the importance of adaptability

    As we look toward the future of commerce, the trend toward technological democratisation will likely continue. New AI tools, emerging platforms, and evolving consumer behaviours will continue to reshape the landscape. The businesses that thrive will be those that remain adaptable, customer-focused, and willing to experiment with new approaches to discovery and engagement.

    The future belongs not to those with the most advanced technology, but to those who can best connect with customers in an increasingly digital-first world. Technology has become the table stakes; everything else is about the human connection.

    The commerce of the future is here, and it’s more accessible than ever. The question isn’t whether you can build it—it’s whether you can capture hearts and minds in a world where everyone can build it.

  • Why bother learning golang?

    As a solution architect, tech lead, lead developer or developer, you are often faced with a task to solve, and it is natural to solve it using the tools you are comfortable with. In most cases, the more familiar you are with your tools, the better the solution will be implemented and the fewer issues you will see once it is deployed to production. This works most of the time, but not always!

    One of the great things about Moore’s Law is that whatever problem we were not able to solve yesterday, we will be able to solve tomorrow, thanks to ever-increasing hardware that runs more CPU cycles and does more work per cycle than ever before.

    Today, you can do almost anything by implementing a full-stack solution based on a modern JavaScript framework with a type-safe language such as TypeScript on top of it. Even if JavaScript in the Node.js environment has its limitations, the tools at hand can make those limitations go away by just adding more computing power and memory.

    Is JavaScript a slow, CPU-hungry language in which you cannot run blazing-fast applications? The answer must be “it depends”. Every C# or Go coder will come up with graphs and benchmarks to show why Node.js is not the way to go, even for small programs.

    The race

    The best way to find out is probably to perform the same task with the same program written in two different languages. Let’s go for the ever-so-common task of JSON parsing: take roughly 10 MB of data and write and read it 100 times.

    Before running the test, I was pretty sure that Go would outperform JavaScript every day of the week, and I gave JavaScript the advantage of upgrading the Node environment to version 21, while Go was using version 1.20.

    This is the source code for index.js
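
    What follows is a reconstruction of that listing rather than the exact original: it builds roughly 10 MB of JSON data and then stringifies and parses it 100 times while timing the run. The item fields and constants such as dataSize are assumptions based on the description above.

    // index.js – reconstructed sketch of the benchmark (not the original listing)
    const dataSize = 10 * 1024 * 1024; // target payload size in bytes
    const iterations = 100;

    // Build an array of simple objects until the serialised JSON is roughly dataSize
    const items = [];
    let approxSize = 2; // the surrounding "[]"
    while (approxSize < dataSize) {
      const item = { id: items.length, name: `item-${items.length}`, price: Math.random() };
      approxSize += JSON.stringify(item).length + 1; // +1 for the separating comma
      items.push(item);
    }
    console.log(`Payload: ~${(approxSize / 1024 / 1024).toFixed(1)} MB, ${items.length} items`);

    const start = Date.now();
    for (let i = 0; i < iterations; i++) {
      const text = JSON.stringify(items); // "writing" the data
      const parsed = JSON.parse(text);    // "reading" it back
      if (parsed.length !== items.length) throw new Error('parse mismatch');
    }
    console.log(`Done in ${(Date.now() - start) / 1000} s`);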

    The first surprise was that with the dataSize set to 10 MB, the Node.js version of the program completed in less than 10 seconds on my Intel MacBook Pro.

    I ran it several times to get a good average.

    I was not expecting it to be so fast, so I wrote the same program in Go, which is supposed to be a really fast and efficient language.

    Now let’s see how Go performs. The code is not written in any particular way that optimises it for the target environment.
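
    As with index.js, the listing below is a reconstruction rather than the original, using only the standard library encoding/json package and mirroring the same assumptions about the payload.

    // main.go – reconstructed sketch of the Go benchmark (not the original listing)
    package main

    import (
        "encoding/json"
        "fmt"
        "math/rand"
        "time"
    )

    const dataSize = 10 * 1024 * 1024 // target payload size in bytes
    const iterations = 100

    type Item struct {
        ID    int     `json:"id"`
        Name  string  `json:"name"`
        Price float64 `json:"price"`
    }

    func main() {
        // Build a slice of simple items until the serialised JSON is roughly dataSize
        items := []Item{}
        approxSize := 2
        for approxSize < dataSize {
            item := Item{ID: len(items), Name: fmt.Sprintf("item-%d", len(items)), Price: rand.Float64()}
            b, _ := json.Marshal(item)
            approxSize += len(b) + 1
            items = append(items, item)
        }
        fmt.Printf("Payload: ~%.1f MB, %d items\n", float64(approxSize)/1024/1024, len(items))

        start := time.Now()
        for i := 0; i < iterations; i++ {
            data, err := json.Marshal(items) // "writing" the data
            if err != nil {
                panic(err)
            }
            var parsed []Item
            if err := json.Unmarshal(data, &parsed); err != nil { // "reading" it back
                panic(err)
            }
        }
        fmt.Printf("Done in %s\n", time.Since(start))
    }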

    My Go program performed well, completing the task in 16.5 seconds. It is really predictable, and writing and then reading 10 MB of JSON data 100 times in about 16 seconds is a perfectly acceptable result. But wait! This is not what I expected: the Node program runs faster than my compiled Go binary.

    Why would I use golang?

    Now, speed is complex. With Just-in-Time (JIT) compilers, languages and runtimes such as Node.js and PHP are catching up, but there are still advantages to a language where the output is a binary produced by a compiler. Part of it also lies in how the runtime handles multiple threads and what kind of locking happens between them.
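
    As a small, generic illustration of that model (not related to the benchmark above): in Go, concurrency is part of the language, and goroutines are scheduled over OS threads by the runtime without any extra libraries.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        results := make([]int, 10)

        // Fan out ten goroutines; each writes to its own slot, so no locking is needed
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                results[n] = n * n // stand-in for real CPU- or IO-bound work
            }(i)
        }

        wg.Wait()
        fmt.Println(results)
    }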

    There are other benefits of Go for building a backend solution that are worth mentioning, and they are why I often make Go my backend choice.

    • The built-in standard library in Go is pretty capable, so not many additional modules are required as external dependencies. With fewer external dependencies, it is easier to keep the application up to date without having to maintain a complex set of libraries whose versions need to match.
    • There are widely used application architectures for how to structure Go applications.
    • There are a couple of frameworks that stand out for making it easier to be a developer. My favourites are:
      • Gin Web Framework, https://gin-gonic.com/, which adds the parts missing from the standard library for building proper applications (see the sketch after this list).
      • Fiber, https://gofiber.io/, where version 2 has been around for a long time and version 3 is on its way and will hopefully be released soon.
    • Type safety, which nowadays, with TypeScript around, is not a unique selling point on its own, but since it happens at build time a lot of potential problems are caught early, as long as you avoid dynamic structs in the general case.
    • Several options for working with databases: you can go through one of the ORMs such as gorm, https://gorm.io/, but there are also good options for working directly with SQL.
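
    To give a feel for Gin, here is a minimal example together with a matching test. The route, package layout and names are made up for illustration; it is a sketch of the style rather than code from any particular project.

    // main.go – a hypothetical minimal Gin application
    package main

    import (
        "net/http"

        "github.com/gin-gonic/gin"
    )

    // newRouter builds the engine separately so tests can use it without starting a server
    func newRouter() *gin.Engine {
        r := gin.Default()
        r.GET("/health", func(c *gin.Context) {
            c.JSON(http.StatusOK, gin.H{"status": "ok"})
        })
        return r
    }

    func main() {
        newRouter().Run() // listens on :8080 by default
    }

    The corresponding test exercises the route through the engine itself, without opening a real port:

    // main_test.go – testing the route with httptest
    package main

    import (
        "net/http"
        "net/http/httptest"
        "testing"
    )

    func TestHealth(t *testing.T) {
        w := httptest.NewRecorder()
        req, _ := http.NewRequest(http.MethodGet, "/health", nil)
        newRouter().ServeHTTP(w, req)
        if w.Code != http.StatusOK {
            t.Fatalf("expected 200, got %d", w.Code)
        }
    }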

    I would select Go when the project is going to be larger and a bit more complex, when I require a rock-solid infrastructure with few dependencies, and when there are CPU- or IO-intensive applications that need to be maintained over time. The ability to write tests even for API routes is really good thanks to the framework features in Gin.

    When are Node.js and TypeScript the right choice?

    If I had asked myself this question two years ago, I would have claimed that even though there are good frameworks for server-side rendering and ways to build a TypeScript application with a backend, I would still not build a backend in Node.js. That is no longer true, and I often build backends using Node.js and TypeScript. There are excellent options to move forward with.

    I would, however, not build without a framework that keeps track of the dependencies so that they work well together. Going library mode, where I pick only the libraries of my own choosing, makes it really easy to get lost in an upgrade nightmare.

    Hono, https://hono.dev/, is one of the great options for building a web application with a backend. It is based on web standards and allows for type safety over RPC operations. This is an excellent choice if you wish to deploy your backend, for example as Cloudflare Workers.
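
    A minimal Hono application can be as small as the sketch below; the route is just a placeholder, but the pattern is what you would deploy, for example, as a Cloudflare Worker.

    import { Hono } from 'hono'

    const app = new Hono()

    // c is the request/response context; param() reads the path parameter
    app.get('/hello/:name', (c) => {
      return c.json({ greeting: `Hello ${c.req.param('name')}` })
    })

    // On Cloudflare Workers the default export acts as the fetch handler
    export default app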

    Next.js, https://nextjs.org, is now rather easy to deploy and run outside of Vercel if you choose to do so. It has a steeper learning curve and has undergone a lot of structural changes to become the framework it is today.

    So for a small solution requiring only a basic backend, I reach for a database plus an ORM (I often use Prisma, https://www.prisma.io/, since it gives me a really easy and type-safe way to interact with a database) together with one of the frameworks above for running the application.

    I would go this way for a smaller application where time is of the essence and the logic is not that complex. With only a couple of methods and objects, it is amazing how fast you can build a full-stack application on this technology, and not all applications need to support thousands of concurrent users.
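
    As a rough illustration of how little code that setup needs: assuming a hypothetical Product model with name and price fields in schema.prisma, a fully typed query is only a few lines.

    import { PrismaClient } from '@prisma/client'

    const prisma = new PrismaClient()

    // `product` and its fields come from the hypothetical model in schema.prisma;
    // the result is typed automatically, no manual casting needed
    async function listCheapProducts() {
      return prisma.product.findMany({
        where: { price: { lt: 100 } },
        orderBy: { name: 'asc' },
      })
    }

    listCheapProducts()
      .then((products) => console.log(products))
      .finally(() => prisma.$disconnect())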

    Bun is going to change it all

    You are not bound to use only Node.js as the runtime environment. There is an option I have been trying out for some time, Bun, https://bun.com/, which claims to be 100% Node.js compatible. With the speed I am seeing in my tests with Bun, I am definitely going to consider it now that it has passed version 1.0 (currently version 1.2).

    So when do I use what?

    Did this article not provide a clear answer to which solution you should choose? That is because the correct answer depends on what you wish to do, who is going to do it, and how much support you will have from AI tools for building it.

    All of the options presented here are really good options for a backend infrastructure.

    So I will end this post by just leaving the choice to you.

  • Getting started with Typescript and Express

    There are many tutorials on how to get started with TypeScript and Express, but I still wish to contribute my own favourite way of setting up a project, whether it is a microservice or a somewhat larger piece of code. To fully cover my setup I will need to publish a couple of posts, so here we go.

    Why Typescript?

    Having been a developer for many years, I like to discover problems early and to use type-safe code even when building a small project that would be easy to create in plain JavaScript. Since discovering TypeScript, I find it also lets me interoperate better with types coming from other languages in the world of REST interfaces.

    TypeScript does a great job of discovering problems before they happen, at compile time, and the output is still understandable JavaScript code.

    What about express?

    The Express server is really quick and neat to set up and start using. It is fast and does its job really well. It works well in a Docker environment, but you are still more or less required to have a reverse proxy in front of it in a production environment.

    Getting started

    First, you need npm to be able to install and run the Node.js infrastructure. When you download Node.js, npm is included. It is the only native package you need to install, though I also recommend using an editor such as Visual Studio Code.

    Once nodejs is installed, you can create your first project. We will be using the command line to build this.

    First we need to create a new package for the application. This is done by running the command "npm init -y" to initialise the application with default settings. You can of course edit them later.

    fgn@Fredriks-MBP:typescript-project$ npm init -y
    Wrote to /Users/fgn/dev/typescript-project/package.json:
    
    {
      "name": "typescript-project",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1"
      },
      "keywords": [],
      "author": "",
      "license": "ISC"
    }

    Now you need to install TypeScript. Installing it globally makes the tsc command available everywhere on your machine.

    Issue the following command to install TypeScript globally on your computer. The sudo prefix is sometimes required to gain write access to the global installation directory.

    sudo npm install typescript -g

    If you already have TypeScript installed globally, you just need to install it as a project dependency.

    npm install typescript

    Then, to initialise the TypeScript configuration, use the following command to create a new tsconfig.json file with comments and some default options selected. It will be created in the current directory.

    tsc --init

    Now open the file tsconfig.json in your favourite editor and edit these three settings (they live under compilerOptions):

    "sourceMap": true,
    "outDir": "./build",
    "rootDir": "./src",

    This keeps the source files and the build output in separate directories.
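
    The relevant part of tsconfig.json then looks roughly like this; the other options generated by tsc --init can be left at their defaults.

    {
      "compilerOptions": {
        "sourceMap": true,
        "outDir": "./build",
        "rootDir": "./src"
      }
    }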

    Now you can add the additional packages to your project before we start writing the code. This installs the express package and its type definitions so that we can get an endpoint up and running.

    npm install express @types/express 
    

    The code for the project

    In this case the code could be implemented in just one file, but for the upcoming blog posts we will add some middleware and more to the project.

    Create the file src/server.ts that will contain the code for the express server.

    mkdir src
    code .

    Now we will create a simple service that returns status 200 and the text string OK when you request it over HTTP on port 4000.

    First we will go through the service row by row and then you will find the complete code below (even if it’s really short).

    import express = require('express')

    This imports the Express package so that we can refer to it as express in the code.

    const app : express.Application = express();
    const portNumber: number = 4000;

    This creates a new constant, app, typed as express.Application and assigned a new Express server by invoking express() as a function. We also assign the port number to a constant here; later on, we will get the port number from configuration.

    app.get('/', (req, res) => {
      res.status(200).send('OK');
    });

    This registers a handler for the HTTP verb GET on the path /. When a request with that path hits the service, the callback is invoked, and the response is sent back through the res object.

    In this case, it sets the status to 200 OK and sends back the string "OK".

    app.listen(portNumber, () => {
      console.log(`Listening to port ${portNumber}`);
    });

    Now it is time to register the listener on the port configured above. Note the backticks (template literals) that allow us to refer to variables within the string using the ${variable} notation.

    The complete code is here.

    import express = require('express')
    
    const app : express.Application = express();
    const portNumber: number = 4000;
    
    app.get('/', (req, res) => {
        res.status(200).send('OK');
    });
    
    app.listen(portNumber, () => {
        console.log(`Listening to port ${portNumber}`);
    });

    Now it is time to make the code compile by adding a build task to the project. Open your package.json and add a script to build the project using the TypeScript compiler tsc.

    Set "main" to "build/server.js", remove the "test" entry from scripts, and add "start": "tsc && node ./build/server.js" instead. This is because build/server.js will be the primary file for running the project. Later in the post we will use the start script to build and start the server.

    {
      "name": "typescript-project",
      "version": "1.0.0",
      "description": "",
      "main": "build/server.js",
      "scripts": {
        "start": "tsc && node ./build/server.js"
      },
      "keywords": [],
      "author": "",
      "license": "ISC",
      "dependencies": {
       "@types/express": "^4.17.6",
        "express": "^4.17.1",
        "typescript": "^3.9.5"
      }
    }
    

    Now run the command to build the project.

    npm run start

    This builds the file build/server.js, which is what is needed to run the server. The start script builds and then runs the server in one step.

    Testing the service with a simple tool such as curl or Postman will show the result from our service:

    curl --verbose --get http://localhost:4000

    Watching directory for changes

    One great feature of programming languages that are not compiled is that once a file is changed, the project reloads and just keeps running, with no manual restarts of the scripts.

    This is of course also available when using TypeScript, through the package ts-node-dev, which does just this. So back to our terminal to install it, together with ts-node for running TypeScript directly.

    npm install ts-node ts-node-dev

    To support this workflow, create a new script in package.json named "dev" that runs the TypeScript in development mode, watching for changes to any source file, with server.ts as the starting point.

    "dev": "ts-node-dev ./src/server.ts"

    {
      "name": "typescript-project",
      "version": "1.0.0",
      "description": "",
      "main": "server.js",
      "scripts": {
        "dev": "ts-node-dev ./src/server.ts",
        "start": "tsc && node ./build/server.js"
      },
      "keywords": [],
      "author": "",
      "license": "ISC",
      "dependencies": {
        "@types/express": "^4.17.6",
        "express": "^4.17.1",
        "ts-node": "^8.10.2",
        "typescript": "^3.9.5"
      }
    }

    Now you can run your server and have it watch for changes, restarting automatically.

    npm run dev

    Now any change to the source files results in a restart.

    This is all for now, and I hope you have been able to set up your first TypeScript and Express piece of code. In the next post, we will look into middlewares.