
5 Common Cloud Server Setups For Your Web Application

Introduction

When deciding which server architecture to use for your environment, there are many factors to consider, such as performance, scalability, availability, reliability, cost, and ease of management.

Here is a list of commonly used server setups, with a short description of each, including pros and cons. Keep in mind that all of the concepts covered here can be used in various combinations with one another, and that every environment has different requirements, so there is no single, correct configuration.

1. Everything On One Server

The whole environment resides on a single server. For a typical web application, that would include the web server, application server, and database server. A common variation of this setup is a LAMP stack, which stands for Linux, Apache, MySQL, and PHP, on a single server.

Use Case: Good for setting up an application quickly, as it is the simplest setup possible, but it offers little in the way of scalability and component isolation.

Everything On a Single Server

Pros:

  • Simple

Cons:

  • Application and database contend for the same server resources (CPU, Memory, I/O, etc.), which, aside from possible poor performance, can make it difficult to determine the source (application or database) of poor performance
  • Not readily horizontally scalable

Related Tutorials:

  • How To Install LAMP On Ubuntu 14.04

2. Separate Database Server

The database management system (DBMS) can be separated from the rest of the environment to eliminate the resource contention between the application and the database, and to increase security by removing the database from the DMZ, or public internet.

Use Case: Good for setting up an application quickly, while keeping the application and database from fighting over the same system resources.

Separate Database Server

Pros:

  • Application and database tiers do not contend for the same server resources (CPU, Memory, I/O, etc.)
  • You can vertically scale each tier separately, by adding more resources to whichever server needs increased capacity
  • Depending on your setup, it may increase security by removing your database from the DMZ

Cons:

  • Slightly more complex setup than single server
  • Performance issues can arise if the network connection between the two servers is high-latency (i.e. the servers are geographically distant from each other), or if the bandwidth is too low for the amount of data being transferred
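As a rough sketch of what splitting the database out involves with MySQL: the database server is bound to its private network interface, and the application server is granted remote access. The IP addresses, database name, and user below are placeholders for illustration, not values from the original tutorials:

```sql
-- On the database server, in /etc/mysql/my.cnf, bind MySQL to the
-- private network interface (placeholder IP):
--   bind-address = 10.0.0.10
--
-- Then allow the application server (assumed private IP 10.0.0.5)
-- to connect remotely:
CREATE USER 'appuser'@'10.0.0.5' IDENTIFIED BY 'change-this-password';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'10.0.0.5';
FLUSH PRIVILEGES;
```

The application then points its database connection at 10.0.0.10 instead of localhost, ideally over a private network rather than the public internet.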

Related Tutorials:

  • How To Set Up a Remote Database to Optimize Site Performance with MySQL
  • How To Migrate a MySQL Database To a New Server On Ubuntu 14.04

3. Load Balancer (Reverse Proxy)

Load balancers can be added to a server environment to improve performance and reliability by distributing the workload across multiple servers. If one of the servers that is load balanced fails, the other servers will handle the incoming traffic until the failed server becomes healthy again. A load balancer can also be used to serve multiple applications through the same domain and port, by using a layer 7 (application layer) reverse proxy.

Examples of software capable of reverse proxy load balancing: HAProxy, Nginx, and Varnish.

Use Case: Useful in an environment that requires scaling by adding more servers, also known as horizontal scaling.

Load Balancer

Pros:

  • Enables horizontal scaling, i.e. environment capacity can be scaled by adding more servers to it
  • Can protect against DDOS attacks by limiting client connections to a sensible amount and frequency

Cons:

  • The load balancer can become a performance bottleneck if it does not have enough resources, or if it is configured poorly
  • Can introduce complexities that require additional consideration, such as where to perform SSL termination and how to handle applications that require sticky sessions
  • The load balancer is a single point of failure; if it goes down, your whole service can go down. A high availability (HA) setup is an infrastructure without a single point of failure. To learn how to implement an HA setup, you can read this section of How To Use Floating IPs.
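To make the layer 7 routing concrete, here is a hypothetical HAProxy configuration fragment that serves two applications through the same domain and port, routing by URL path. The backend names and private IP addresses are assumptions made for this sketch:

```
# Hypothetical /etc/haproxy/haproxy.cfg fragment (layer 7 reverse proxy)
frontend www
    bind *:80
    mode http
    acl is_blog path_beg /blog        # requests under /blog go to the blog app
    use_backend blog-backend if is_blog
    default_backend web-backend

backend web-backend
    mode http
    balance roundrobin                # spread requests across both servers
    server web1 10.0.0.11:80 check    # 'check' enables health checks
    server web2 10.0.0.12:80 check

backend blog-backend
    mode http
    server blog1 10.0.0.21:80 check
```

If web1 fails its health check, HAProxy stops sending it traffic until it becomes healthy again, which is the failover behavior described above.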

Related Tutorials:

  • An Introduction to HAProxy and Load Balancing Concepts
  • How To Use HAProxy As A Layer 4 Load Balancer for WordPress Application Servers
  • How To Use HAProxy As A Layer 7 Load Balancer For WordPress and Nginx

4. HTTP Accelerator (Caching Reverse Proxy)

An HTTP accelerator, or caching HTTP reverse proxy, can be used to reduce the time it takes to serve content to a user through a variety of techniques. The main technique employed with an HTTP accelerator is caching responses from a web or application server in memory, so future requests for the same content can be served quickly, with less unnecessary interaction with the web or application servers.

Examples of software capable of HTTP acceleration: Varnish, Squid, Nginx.

Use Case: Useful in an environment with content-heavy dynamic web applications, or with many commonly accessed files.

HTTP Accelerator

Pros:

  • Increase site performance by reducing CPU load on web server, through caching and compression, thereby increasing user capacity
  • Can be used as a reverse proxy load balancer
  • Some caching software can protect against DDOS attacks

Cons:

  • Requires tuning to get best performance out of it
  • If the cache-hit rate is low, it could reduce performance
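As an illustration of the caching technique, here is a hypothetical Nginx configuration fragment that caches backend responses so repeat requests skip the application server. The cache path, zone name, timings, and upstream address are assumptions for this sketch and would need tuning, as noted above:

```nginx
# Define an in-memory keys zone plus on-disk cache storage
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;      # cache successful responses for 10 minutes
        # Expose HIT/MISS so the cache-hit rate can be observed and tuned
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://10.0.0.11:8080;  # application server (placeholder IP)
    }
}
```

Watching the X-Cache-Status header is one way to check whether the cache-hit rate is high enough for the accelerator to be helping rather than hurting.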

Related Tutorials:

  • How To Install Wordpress, Nginx, PHP, and Varnish on Ubuntu 12.04
  • How To Configure a Clustered Web Server with Varnish and Nginx
  • How To Configure Varnish for Drupal with Apache on Debian and Ubuntu

5. Master-Slave Database Replication

One way to improve the performance of a database system that performs many reads compared to writes, such as a CMS, is to use master-slave database replication. Master-slave replication requires a master and one or more slave nodes. In this setup, all updates are sent to the master node and reads can be distributed across all nodes.

Use Case: Good for increasing the read performance for the database tier of an application.

Here is an example of a master-slave replication setup, with a single slave node:

Master-Slave Database Replication

Pros:

  • Improves database read performance by spreading reads across slaves
  • Can improve write performance by using the master exclusively for updates (it spends no time serving read requests)

Cons:

  • The application accessing the database must have a mechanism to determine which database nodes it should send update and read requests to
  • Updates to slaves are asynchronous, so there is a chance that their contents could be out of date
  • If the master fails, no updates can be performed on the database until the issue is corrected
  • Does not have built-in failover in case of failure of master node
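In classic MySQL, the setup above is typically wired together along these lines. This is only a sketch: the server IDs, host, credentials, and log coordinates are placeholders, and the real coordinates come from running SHOW MASTER STATUS on the master:

```sql
-- Master my.cnf:  server-id = 1  and  log_bin = mysql-bin
-- Slave my.cnf:   server-id = 2
--
-- On the master, create an account the slave can replicate with:
CREATE USER 'repl'@'10.0.0.%' IDENTIFIED BY 'change-this-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%';

-- On the slave, point at the master and start replicating.
-- MASTER_LOG_FILE / MASTER_LOG_POS come from SHOW MASTER STATUS:
CHANGE MASTER TO
    MASTER_HOST='10.0.0.10',
    MASTER_USER='repl',
    MASTER_PASSWORD='change-this-password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=154;
START SLAVE;
```

Because replication is asynchronous, a read sent to the slave immediately after a write to the master may still see the old data, which is the staleness caveat listed above.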

Related Tutorials:

  • How To Optimize WordPress Performance With MySQL Replication On Ubuntu 14.04
  • How To Set Up Master Slave Replication in MySQL

Example: Combining the Concepts

It is possible to load balance the caching servers, in addition to the application servers, and to use database replication in a single environment. The purpose of combining these techniques is to reap the benefits of each without introducing too many issues or too much complexity. Here is an example diagram of what such a server environment could look like:

Load Balancer, HTTP Accelerator, and Database Replication Combined

Let's assume that the load balancer is configured to recognize static requests (like images, css, javascript, etc.) and send those requests directly to the caching servers, and to send all other requests to the application servers.

Here is a description of what would happen when a user requests dynamic content:

  1. The user requests dynamic content from http://example.com/ (load balancer)
  2. The load balancer sends the request to app-backend
  3. app-backend reads from the database and returns the requested content to the load balancer
  4. The load balancer returns the requested data to the user

If the user requests static content:

  1. The load balancer checks cache-backend to see if the requested content is cached (cache-hit) or not (cache-miss)
  2. If cache-hit: return the requested content to the load balancer and jump to step 7. If cache-miss: the cache server forwards the request to app-backend, through the load balancer
  3. The load balancer forwards the request through to app-backend
  4. app-backend reads from the database then returns requested content to the load balancer
  5. The load balancer forwards the response to cache-backend
  6. cache-backend caches the content then returns it to the load balancer
  7. The load balancer returns requested data to the user

This environment still has two single points of failure (load balancer and master database server), but it provides all of the other reliability and performance benefits that were described in each section above.

Conclusion

Now that you are familiar with some basic server setups, you should have a good idea of what kind of setup you would use for your own application(s). If you are working on improving your own environment, remember that an iterative process is best, to avoid introducing too many complexities too quickly.

Let us know of any setups you recommend or would like to learn more about in the comments below!

By Mitchell Anicas
Reference: digitalocean