ARK2/Architecture


Architecture

ARK2 has been built to modern web standards using current PHP development practice. This includes using widely supported frameworks and components in a front-controller architecture.

Frameworks and Components

ARK2 is built on the Symfony components, using the Silex micro-framework to manage the services and routing.

Front-Controller
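
In a front-controller architecture every request enters through a single script, which boots the application, matches the URL against the registered routes, and dispatches to a controller. As a minimal sketch of the pattern (the route here is illustrative, not part of the ARK2 API), a Silex front controller looks like this:

  <?php
  // web/index.php - the single entry point for all requests
  require __DIR__.'/../vendor/autoload.php';

  use Silex\Application;

  $app = new Application();

  // An illustrative route; the real routes are registered via the router
  $app->get('/items/{id}', function ($id) {
      return 'Item '.$id;
  });

  $app->run(); // match the current request and send the response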

Container

ARK2 uses a dependency injection container to manage how its services are created and wired together: each service is defined once in the container, and code that needs a service asks the container for it rather than constructing it directly, keeping components decoupled and easy to swap or test.

The container used is Pimple (http://pimple.sensiolabs.org/), the small container that underlies Silex.
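
As a minimal sketch of how Pimple works (the service names are illustrative), services are defined as closures on the container and are only instantiated on first access, with their dependencies injected from the same container:

  <?php
  require __DIR__.'/vendor/autoload.php';

  use Pimple\Container;

  $container = new Container();

  // Plain values are stored as parameters
  $container['db.options'] = ['url' => 'mysql://ark:secret@localhost/ark'];

  // Services are defined as closures that receive the container
  $container['db'] = function ($c) {
      return \Doctrine\DBAL\DriverManager::getConnection($c['db.options']);
  };

  // The connection is created lazily on first access and shared thereafter
  $conn = $container['db'];

Silex's Application class is itself a Pimple container, so services registered this way are available throughout the application.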

Router

Controllers

Model

View

Object-Relational Mapping (ORM)

Database Abstraction

Multi-Tenancy / Multi-Site / Multi-Config

A number of architectural issues surround Multi-Tenancy, Multi-Site and Multi-Config in an ARK instance. These primarily affect how a hosted ARK service will be run, but also how a standalone organisation will manage their ARK instances.

  • An ARK instance is here defined as a combination of ARK users and the ARK site data they are able to access, usually under a single project/brand/organisation.
  • A database is defined as a combination of a database user and the tables it can access, not the database server instance, which can hold multiple databases.
  • Multi-tenancy is the ability to have multiple ARK instances in a single ARK install.
  • Multi-site is the ability to have multiple sites within an ARK instance.
  • Multi-config is the ability to have multiple ARK schemas within an ARK instance, i.e. different sites can have different configs.

Choosing an architecture involves a series of trade-offs around ease-of-development versus ease-of-maintenance. The simplest solution is the current structure, where an ARK instance has a single tenant with a single config across multiple sites. There are problems with this, however:

  • Each instance requires a separate code install, database and URL
  • If a single organisation wants multiple ARK schemas (say a trench-based rural one and a full urban SCR one) they must run separate ARK instances for each schema, meaning users must remember which instance holds which sites and maintain separate user IDs, and the apps using the API must know this as well.
  • Making significant upgrades to an organisation's config requires a separate ARK install
  • Scaling up to hundreds of instances creates hundreds of installs and hundreds of databases, which will make support difficult and expensive even with automation

At the opposite extreme is an architecture where a single ARK install supports multiple tenants, sites and configs in a single database. While this solves the above issues by greatly simplifying maintenance, there are a number of issues here too:

  • Code and SQL become significantly more complicated; joins especially become difficult (see the sketch after this list)
  • Key bloat on all tables, as every table needs tenant and site fields, which may affect performance
  • Table bloat, with all data held in a single set of tables, which may affect performance
  • Backup and archiving are an issue, as the data for different tenants needs to be separated out, probably requiring custom code instead of standard tools
  • Security is an issue, with data access control now occurring in the application code
  • A single tenant can overload the server and take all tenants down
  • Distributing load across servers becomes difficult, if not impossible
  • Upgrading an install means all site configs must be upgraded too; you cannot leave a site on an old version
  • Existing code and data would make ARK1 migration far more complex
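
To illustrate the join problem, here is a hypothetical query under the fully shared-schema design (the table and column names are invented for the sketch): every join condition and every WHERE clause must repeat the tenant and site keys:

  <?php
  require __DIR__.'/vendor/autoload.php';

  use Doctrine\DBAL\DriverManager;

  $conn = DriverManager::getConnection(['url' => 'mysql://ark:secret@localhost/ark']);

  $tenant = 'dig1';
  $site = 'ABC01';

  // Without multi-tenancy this would be a single-column join on itemkey
  $rows = $conn->fetchAll(
      'SELECT i.itemkey, f.value
         FROM items i
         JOIN fragments f
           ON f.tenant  = i.tenant   -- extra join key
          AND f.site    = i.site     -- extra join key
          AND f.itemkey = i.itemkey
        WHERE i.tenant = ? AND i.site = ?',
      [$tenant, $site]
  );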

A half-way house model would be to allow a single install to have multiple tenants, but give each tenant their own database:

  • The simple key structure is kept, keeping the code simple
  • Each tenant's data is kept separate, solving the size, security and backup issues
  • Load can be easily distributed by moving a tenant to another server, simply by moving their database and/or redirecting their URL
  • Code maintenance is kept simple, but database management becomes more complex again
  • Upgrading an install will still require upgrading all sites

Note: A practical limitation is imposed by MySQL and SQLite, which only allow a single 'namespace' per database, unlike PostgreSQL and others, which support multiple 'namespaces' (schemas) and so would allow each tenant a separate set of tables within the same database.
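
A minimal sketch of the half-way house model, assuming each tenant is identified by subdomain and that the tenant-to-database map lives in a small shared store (both details are invented for illustration):

  <?php
  require __DIR__.'/vendor/autoload.php';

  use Doctrine\DBAL\DriverManager;
  use Symfony\Component\HttpFoundation\Request;

  // Hypothetical tenant-to-DSN map; in practice this might live in a small
  // shared admin database
  $dsns = [
      'dig1' => 'mysql://dig1:secret@db1.example.com/ark_dig1',
      'dig2' => 'mysql://dig2:secret@db2.example.com/ark_dig2',
  ];

  $request = Request::createFromGlobals();
  $tenant = explode('.', $request->getHost())[0]; // e.g. dig1.arkhost.example

  if (!isset($dsns[$tenant])) {
      http_response_code(404);
      exit;
  }

  // Moving a tenant to another server is just a change to their DSN entry
  $conn = DriverManager::getConnection(['url' => $dsns[$tenant]]);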

The strongest case can be made for supporting Multi-Config, primarily as a means of allowing larger clients (including LP ourselves) to host all their data inside a single install with a single set of users. This has several implications, however:

  • It raises the Site Code from being an attribute of an item in a module to being a key at a higher level than the modules themselves, i.e. the modules available will change depending on the Site Code
  • As a consequence it substantially changes the API, adding the site code above the module (see the route sketch after this list)
  • It may make searching across site codes difficult
  •  ???
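
The API change can be seen in the shape of the routes. The following Silex sketch is hypothetical (the paths and the config.resolver service are invented for illustration, not the final ARK2 API):

  <?php
  require __DIR__.'/vendor/autoload.php';

  use Silex\Application;

  $app = new Application();

  // Hypothetical service resolving a site code to its config
  $app['config.resolver'] = $app->protect(function ($siteCode) {
      return ['site' => $siteCode]; // stand-in for the real schema/config
  });

  // Single-config: the module is the top-level key
  $app->get('/api/{module}/{item}', function ($module, $item) {
      return "Module $module, item $item";
  });

  // Multi-config: the site code is raised above the module, so the set of
  // modules available can differ per site
  $app->get('/api/{siteCode}/{module}/{item}',
      function (Application $app, $siteCode, $module, $item) {
          $config = $app['config.resolver']($siteCode); // per-site config
          return "Site {$config['site']}, module $module, item $item";
      });

  $app->run();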

The full combination would allow a hosted ARK solution as follows:

  • Lowest price tier (£5) / mass market / community-dig type sites are hosted in a single multi-tenant install, only allowed a single site/config, may not allow own domain?
  • Upgrade from the lowest tier (£10), still in the single multi-tenant install, but allowed say 5 sites/configs, maybe allow own domain?
  • Next tier(s) (£15/£20/£25?) gives a separate install, probably in its own virtual host, with own domain and unlimited sites/configs?
  • Possible top-tier for large-scale sites with guaranteed support contract

This would keep the maintenance burden on the lowest-profit sites to a minimum, while encouraging up-sells as and when needed.

Install management could be simplified by developing a set of built-in tools.

  • Install using git; run git pull to upgrade
  • Doctrine Migrations enable automatic data updates
  • An auto-check for new releases that notifies the admin
  • An admin panel to put the site into maintenance mode, run the code update, then run the data update (sketched below)
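
A rough sketch of how these tools could combine into a single upgrade command (the command name, the maintenance-flag mechanism and the paths are all assumptions, not ARK2 code):

  <?php
  require __DIR__.'/vendor/autoload.php';

  use Symfony\Component\Console\Command\Command;
  use Symfony\Component\Console\Input\InputInterface;
  use Symfony\Component\Console\Output\OutputInterface;
  use Symfony\Component\Process\Process;

  class UpgradeCommand extends Command
  {
      protected function configure()
      {
          $this->setName('ark:upgrade')->setDescription('Update code and data');
      }

      protected function execute(InputInterface $input, OutputInterface $output)
      {
          touch('maintenance.flag'); // hypothetical maintenance-mode switch

          (new Process(['git', 'pull']))->mustRun(); // code update

          // Data update via Doctrine Migrations, non-interactive
          (new Process(['vendor/bin/doctrine-migrations', 'migrations:migrate', '-n']))->mustRun();

          unlink('maintenance.flag');
          $output->writeln('Upgrade complete.');

          return 0;
      }
  }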

Splitting database roles may assist in this:

  • User database - allows a multi-tenant install to share users across all or some tenants, or any tenant to have their own users
  • Config database - the ARK configuration, schemas, forms, etc.; allows a multi-tenant install to share configs across all or some tenants, or any tenant to have their own set
  • Data database - the ARK data; each tenant will have their own database

The framework will manage three separate database connections, but where the database roles are shared by a single database the connection objects will be the same.
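
A minimal sketch of that rule, assuming Doctrine DBAL connections and invented DSNs: build one connection per distinct DSN and hand the same object to every role that shares it:

  <?php
  require __DIR__.'/vendor/autoload.php';

  use Doctrine\DBAL\DriverManager;

  function arkConnections(array $dsns)
  {
      $byUrl = [];
      $conns = [];
      foreach (['user', 'config', 'data'] as $role) {
          $url = $dsns[$role];
          // One connection object per distinct database
          if (!isset($byUrl[$url])) {
              $byUrl[$url] = DriverManager::getConnection(['url' => $url]);
          }
          $conns[$role] = $byUrl[$url];
      }
      return $conns;
  }

  $db = arkConnections([
      'user'   => 'mysql://ark:secret@localhost/ark_core',
      'config' => 'mysql://ark:secret@localhost/ark_core', // shared with users
      'data'   => 'mysql://dig1:secret@db1.example.com/ark_dig1',
  ]);

  // User and config share a database, so they share a connection object
  assert($db['user'] === $db['config']);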