“I do feel envious of back-end developers and am sorta clueless concerning what they do.”
“Full-stack devs, you amaze me with your awesome voodoo stuff!”
If you’re a front-end developer wanting to become more familiar with the back-end, the number of different concepts to learn can be overwhelming.
Maybe you want to feel the power of shipping an end-to-end production app by yourself without needing to rely on the skills of others.
Or maybe you simply want to be able to chat with your back-end teammates without feeling dumb.
Whatever your motivation, there are so many pieces to the puzzle that is a modern-day web app. How do all these servers, storage and services fit together? And when (or why) would you want to use each one?
In this article, I will give you a high-level overview of the main components used in today’s web apps. You will learn:
- each component’s core purpose and benefits
- the other components it interacts with
- examples of the commonly used technologies which implement it
A web application server is where your custom code runs that processes a client’s request (either a request for an HTML page or an API call). Your custom code will call out to other components to send or fetch data (e.g. those listed below) and then send a response back to the client.
You write this custom code in a server-side language such as Node.js, Python, PHP, Java, C# or Ruby. Each language has its own “web framework” (e.g. Ruby on Rails, ASP.NET MVC for C# or Express for Node.js). These frameworks enable the developer to write less boilerplate code to handle the request.
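To make the request→response cycle concrete, here is a minimal sketch using Python’s built-in `wsgiref` helpers — the raw interface that frameworks like Express or Ruby on Rails wrap with nicer abstractions. The route path and JSON body are made up for illustration.

```python
from wsgiref.util import setup_testing_defaults

# A bare-bones request handler: receives the request environment,
# decides what to do based on the path, and returns a response body.
# Web frameworks generate this kind of boilerplate for you.
def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path == "/api/hello":
        body = b'{"message": "hello"}'
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b"Not Found"
        start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [body]

# Exercise the app in-process with a simulated request environment.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/api/hello"

captured = {}
def start_response(status, headers):
    captured["status"] = status

response_body = b"".join(app(environ, start_response))
```

A framework’s job is essentially to route the path matching, parse the request, and serialise the response so you only write the middle part.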
In addition to hosting custom application code, some web app architectures also employ a “web server process” such as Apache HTTP Server or nginx. These server processes intercept the client request before it reaches your custom code. They are used for a few reasons:
- To quickly redirect certain requests rather than having to do it via custom code.
- To serve static content (e.g. images, CSS, JS) stored on the web server’s file system faster than going through your custom code.
- Certain server-side languages (e.g. PHP) don’t have a production-grade web server built into them and so need to be launched by a dedicated web server process.
The web server machine itself where these tools and your custom code are installed can take a few forms:
- a physical machine
- a virtual private server (e.g. Rackspace, Linode)
- a managed virtual machine instance (e.g. AWS EC2, Google Compute Engine)
- a Platform-as-a-Service (PaaS) host (e.g. Heroku, AWS Elastic Beanstalk)
You may hear these different locations where your application code runs collectively referred to as “compute”.
If you are expecting traffic spikes or if high uptime is important, you may decide to install your custom code on 2 or more application servers. In this case, you will then need to use a load balancer.
A load balancer is a server which receives the request from the client and then forwards it on to a web application server.
You need to tell it the locations (IP addresses or domain names) of the web application servers to which it will be forwarding client requests. It will then send regular ping messages called “health checks” to each application server (e.g. every 5 seconds) to ensure that the server is responsive. When the load balancer receives a client request, it uses an algorithm (e.g. round robin) to forward the request to the next healthy application server.
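The forwarding logic described above can be sketched in a few lines. This is a toy illustration (server IPs and health states are hypothetical), not how HAProxy or ELB are actually implemented:

```python
from itertools import cycle

# Hypothetical pool of application servers. In a real load balancer,
# the health map is updated by periodic health-check pings.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
healthy = {"10.0.0.1": True, "10.0.0.2": False, "10.0.0.3": True}

rotation = cycle(servers)

def next_healthy_server():
    """Round robin: walk the rotation, skipping servers whose
    last health check failed."""
    for _ in range(len(servers)):
        server = next(rotation)
        if healthy[server]:
            return server
    raise RuntimeError("no healthy servers available")

# Four incoming requests: the unhealthy server (.2) is never chosen.
chosen = [next_healthy_server() for _ in range(4)]
```

Note how the failed server simply drops out of the rotation — this is the mechanism behind the high-availability benefit described below.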
There are 2 key benefits of using a load balancer in a web app:
- It helps maintain consistent response times by ensuring that no single web server is swamped with all the requests, which would slow its processing of each one.
- It maintains high availability. If a server crashes, all subsequent client requests will still succeed as they will be routed to a healthy server and your end users won’t notice any issues.
Some examples of load balancer implementations are:
- HAProxy — open-sourced software which you would install yourself on your own virtual machine
- AWS Elastic Load Balancer — a suite of managed load balancers, meaning you don’t need to provision virtual servers or install load balancing software on them yourself. It can effectively be treated as a cloud service.
When a user enters a URL into their address bar, the browser takes the domain part of the URL (e.g. www.google.com) and makes a call to a DNS nameserver. The nameserver sends back an IP address for that website’s server (e.g. 220.127.116.11). Once the browser has an IP address, it can send the actual request for the web page.
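You can perform the same lookup yourself using Python’s standard library, which asks the operating system’s resolver (and, through it, DNS nameservers) for the IP behind a hostname — the step a browser completes before it can send any HTTP request:

```python
import socket

# Resolve a hostname to an IPv4 address, just as the browser does
# before sending the web page request. "localhost" is used here so
# the example works without network access.
ip_address = socket.gethostbyname("localhost")
```

Swapping in a real domain (e.g. `socket.gethostbyname("www.google.com")`) would return that site’s public IP, assuming network access.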
If your web application is using a load balancer you would configure the domain name to point to your load balancer’s domain name or IP address. If you’re not using a load balancer, then you would point the domain name directly at your application server’s domain name/IP address.
Most internet domain name registration services (e.g. GoDaddy, NameCheap) provide a DNS management console. These allow you to configure your domain names (and subdomains) to point to the location of your application.
If you wish, you can also transfer your nameservers over to a cloud provider such as AWS Route 53 and manage them from there. A benefit of doing this is that you keep all your application’s environment configuration in one place, which also makes it easier to automate.
If you’re building a web app (or static website), you need to serve it over HTTPS to ensure secure communication between your users and your servers. There are now also SEO benefits to using HTTPS, so there’s no excuse not to use it.
This means that you need SSL certificates installed on your back-end. Specifically, you need to install them on any server which is the first point of contact for a client request. That usually means the load balancer and the CDN server, but could also be the application servers if you’re not using a load balancer.
You can use Let’s Encrypt to generate a certificate for free. Alternatively, if you’re using cloud infrastructure you can use a managed service such as AWS Certificate Manager. This allows you to create, automatically renew and distribute SSL certificates to your application servers, load balancers and CDN servers.
Almost all web applications need to persist data somewhere. In most cases, that somewhere is a form of database. The database’s main job is to persist data reliably to permanent storage and to allow that data to be retrieved back out via queries. It may also enforce some rules around the structure of the data it stores.
There are many different database implementations which we don’t have time to go into here. There are, however, 2 high-level categories that most fall under: relational databases (SQL-based) and “NoSQL” databases. Relational databases (e.g. MySQL, Postgres, MS SQL Server, Oracle, SQLite) have been around for over 40 years and have been the mainstay of most web apps. During the past decade or so, NoSQL databases (e.g. MongoDB, Cassandra, CouchDB, DynamoDB) have become much more common in web apps, largely because of their scalability benefits and data structure flexibility.
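As a quick taste of the relational model, here is a sketch using SQLite (one of the relational databases listed above) via Python’s built-in `sqlite3` module — storing a row and retrieving it back out via a query. The table and data are made up for illustration:

```python
import sqlite3

# An in-memory relational database: the schema enforces rules about
# the structure of the data (typed columns, NOT NULL constraint).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)"
)
conn.execute("INSERT INTO users (email) VALUES (?)", ("ada@example.com",))
conn.commit()

# Retrieve the persisted data back out via a query.
row = conn.execute("SELECT email FROM users WHERE id = 1").fetchone()
```

A NoSQL store would instead typically accept schemaless documents or key-value items, trading those structural guarantees for flexibility and easier horizontal scaling.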
You can host a database on a single server, but in production scenarios it’s more common to host it on 2 or more servers in some form of cluster. This ensures that the database is highly available and reduces the risk of data loss, e.g. if one server’s storage gets corrupted.
In recent years, a small number of cloud-hosted “serverless databases” have become available. These are databases which you can call via an API but that you don’t need to worry about setting up servers to host them. The cloud vendor invisibly does that for you, in addition to handling things such as automated backups. Examples of these are DynamoDB (NoSQL), Firebase Realtime Database (NoSQL) and Aurora Serverless (relational).
Whilst databases are typically used to store dynamic data (e.g. generated by end users or API clients), there are certain categories of data which are not changeable by the user or which are file-based that aren’t a great fit for database storage. Examples of this are:
- Files uploaded by a user via a form
- Your app’s static assets (e.g. images, CSS and JavaScript files)

These are better suited to a blob/file storage service such as AWS S3. The benefits of this are that the cloud vendor stores the files securely and can make redundant copies of them to minimise the risk of data loss.
A “bucket” is the term used to denote a top-level storage folder which a web app would use to store its static content.
Blob/file storage services allow clients to access the files over an HTTP endpoint. For example, your web app’s HTML markup could simply link to the URLs of the images and CSS files stored in AWS S3. However, let’s say your user is based in Paris and your S3 storage is hosted in the western U.S. — that’s several thousand miles for the data to travel, so your user will observe a delay.
A CDN is a service provided by cloud vendors where they have “edge servers” distributed all across the globe. These edge servers take a copy of a file from an “origin” (e.g. a blob/file storage location). Rather than pointing at the blob storage URL for the static assets, your front-end web app would point to their CDN URL. Now rather than a round-trip of a few thousand miles, the distance between the client and the “edge” is a lot less and so the file is fetched much quicker.
Whilst a CDN acts as a form of cache for static files, web applications may also need to temporarily cache dynamic data.
For example, say there is a database query which performs a calculation over yesterday’s data whose result is frequently accessed by thousands of users every day. It doesn’t make sense to contact the database every time a user requests this data.
A solution for this is to use a caching service to store the result for a certain time period after the first user has requested it. Subsequent requests for that data will be served much faster via the cache.
A caching service is essentially a specialised type of database. The server-based clustering comments in the Database section above are all applicable here. A cache takes the form of a key-value store, where the key is a string which your application code uses to query for the data (e.g. DailySiteStats_2018-10-17) and the value is the actual data being cached. A cache’s data is often held fully in memory, which makes retrieval of the data from the cache extremely fast.
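Here is a toy in-memory key-value cache with per-entry expiry, to illustrate the set/get-with-TTL pattern that hosted caching services provide (their real APIs differ; the key and cached value below are hypothetical):

```python
import time

# A minimal key-value cache. Each entry stores the value alongside
# its expiry timestamp; reads past the expiry behave like a miss.
class Cache:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:
            del self._store[key]  # entry has expired: evict and miss
            return None
        return value

cache = Cache()
cache.set("DailySiteStats_2018-10-17", {"visits": 1234}, ttl_seconds=3600)
hit = cache.get("DailySiteStats_2018-10-17")       # served from cache
miss = cache.get("DailySiteStats_2018-10-16")      # not cached
```

In the earlier example, the expensive database calculation would run only on a cache miss, with its result then `set` for subsequent requests.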
Sometimes there are tasks you need to perform that aren’t directly related to responding to a user’s request.
For example, say a user has uploaded a video which you now need to encode and watermark. But this is a long-running task so it doesn’t make sense to make the user wait while it completes. A better approach is to do this asynchronously. Your web app code creates a job message in a queue and informs your user that they will receive an email when the watermarked video is ready.
You would then have a worker task which would do the following:
- Read the message from the queue
- Start processing the video
- Once finished, save the encoded copy of the video
- Send the user a notification email
- Delete the message from the queue
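The steps above can be sketched as follows, with Python’s in-process `queue.Queue` standing in for a hosted message queue service, and the encode/notify helpers as hypothetical stand-ins for the real work:

```python
import queue

# In-process queue standing in for a message queue service (SQS,
# RabbitMQ, etc.). The web app puts job messages on; a worker reads
# them off and processes them asynchronously.
job_queue = queue.Queue()

def encode_and_watermark(video_id):
    return f"{video_id}-encoded"           # stand-in for the long-running task

def send_notification_email(user, video):
    return f"emailed {user}: {video} is ready"

def worker(q):
    message = q.get()                                   # 1. read the message
    encoded = encode_and_watermark(message["video"])    # 2-3. process + save
    receipt = send_notification_email(message["user"], encoded)  # 4. notify
    q.task_done()                                       # 5. remove from queue
    return receipt

# The web app enqueues a job and returns to the user immediately.
job_queue.put({"user": "alice@example.com", "video": "vid42"})
result = worker(job_queue)
```

The key property is that the user’s request completes as soon as the message is enqueued; the slow work happens later, on a different process or machine.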
There are 2 architectural components at work here:
- the message queue itself where your app writes jobs to
- the worker task which reads from the queue and does the processing

Examples of commonly used message queue services are ActiveMQ, RabbitMQ, AWS Simple Queue Service (SQS) and Azure Queue Storage.
You can implement the worker task in a few ways:
- Scheduling a CRON job to trigger some custom code installed on your application server to read from the queue on a certain schedule
- Using a Functions-as-a-Service platform to invoke your worker code whenever a message is added to the queue (see section below)
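A FaaS worker is typically just a handler function that the platform invokes per event. The sketch below follows the shape of an AWS Lambda Python handler receiving a queue event; the event payload and field names are illustrative:

```python
import json

# An AWS Lambda-style handler: the platform calls this function each
# time an event arrives (e.g. a batch of new queue messages). There is
# no server setup code anywhere -- that is the point of FaaS.
def handler(event, context):
    records = event.get("Records", [])
    processed = [json.loads(record["body"])["video"] for record in records]
    return {"statusCode": 200, "processed": processed}

# Simulate the platform invoking the function with a queue event.
event = {"Records": [{"body": json.dumps({"video": "vid42"})}]}
result = handler(event, context=None)
```

Compared with the CRON approach, the platform handles polling, scaling and retries for you; you only supply the function body.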
Functions-as-a-Service is a way that you can run your own custom code (“compute”) on cloud infrastructure without needing to worry about provisioning and maintaining the servers upon which that code will be running. You may also hear this referred to as “serverless” computing (although strictly speaking FaaS is a subset of serverless).
There are many use cases for FaaS:
- Data processing jobs
- Back-end service integrations
- Automated server maintenance and orchestration tasks
- Building a web API
- Server-side HTML rendering
As serverless technologies mature and their limitations reduce, I believe FaaS and other fully managed cloud services will become much more commonplace in web app architectures over the next few years.
The good news for aspiring full-stack developers is this means that many of the above architectural concepts (e.g. application servers, load balancers) will become redundant. Their complexities are being absorbed into the cloud services you will be using. The bar to becoming an accomplished full-stack engineer is being lowered.
That leaves you to focus much more on where the real business value is — designing and writing application code for your users.