Over the last 30 years, object-oriented programming and relational databases have been two of the principal approaches to building large applications. That grip is loosening and other paradigms are regaining popularity, partly because of an inherent tension between how programming objects and relational databases work. Even so, the relational database remains a very effective tool for storing data.
Programming with relational databases and object-oriented programming follow very different patterns. An object contains everything it needs to represent a single thing. A database looks after the persistent data for many things, usually broken down into their component parts, which may in turn be linked to other things. There are a number of approaches to reconciling these differences: object-relational mapping (ORM), document/object storage databases, wrappers around the database and, of course, writing CRUD SQL statements directly within object methods.
There’s a better way to do it, find it
At Yobota we write core banking software, so data integrity is extremely important. We also take a modern approach to software development, in which changes can be implemented quickly. We needed to pick the best of the data access approaches above to achieve both goals. An early design goal was to combine the loose coupling and object encapsulation provided by document storage with the tight data controls of a fully relational database. This has not been easy to implement, but the approach we took has proven effective.
As previously mentioned, the integrity of the internal data is of paramount importance here, which largely rules out document or key/value stores for core data. However, these still have their place elsewhere in the platform. ORMs are easy to adopt on the application side but tend to bring problems of their own in production (try to find a DBA without ORM performance stories!). Writing SQL within the application allows fine control over what's going on, but as applications change, the regression and unit testing burden makes that approach impractical.
There is one team
PostgreSQL has always been known for its transactional adherence to ACID principles and for MVCC, which means it reliably supports demanding, varied workloads. It has also had good JSON handling for several versions now. The JSON functions are fast, comprehensive and, with a bit of practice, easy to use. This has allowed us to put a consistent, JSON-based perimeter around the data store while the data itself is still managed by the database. Applications interact via a consistent, secure interface, and all changes to both the data layer interface and the data storage platform are managed by one team.
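To make the "JSON perimeter" idea concrete, here is a minimal sketch of what crossing it might look like from the application side. The envelope shape and function names are illustrative assumptions, not Yobota's actual interface; the point is that callers only ever exchange JSON documents with a single entry point, so the database team can change internal storage without breaking them.

```python
import json

def build_payload(action: str, data: dict) -> str:
    """Wrap a request in a consistent JSON envelope before it crosses the
    perimeter. The envelope shape here is a hypothetical example."""
    envelope = {"action": action, "data": data}
    return json.dumps(envelope)

def parse_payload(raw: str) -> dict:
    """Sketch of the database-side entry point: parse the envelope and
    reject anything that does not match the agreed interface."""
    envelope = json.loads(raw)
    if "action" not in envelope or "data" not in envelope:
        raise ValueError("malformed envelope")
    return envelope
```

In the real platform the parsing and validation side would live inside PostgreSQL, using its JSON functions, rather than in application code; this sketch only shows the contract both sides agree on.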
Proactive, ready to react
We have adopted a consistent internal data access approach which is rigorously followed within the database. Data from an application arrives as JSON, where it is validated and deconstructed before being passed to the central data store. Because this all happens within the database, it can be deployed as a single unit, and database changes can happen before an application change is implemented, as the interface is just JSON. In addition, all ingress is logged, so if the JSON sent to the database changes before the internal structure does, the updates can be replayed with relative ease.
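The validate-and-deconstruct step, and the replay it enables, can be sketched as follows. The field names and table split are invented for illustration, and in the real platform this logic lives inside the database itself rather than in Python:

```python
import json

# Fields a hypothetical inbound document must carry.
REQUIRED = {"customer_id", "account", "transactions"}

def deconstruct(raw: str) -> dict:
    """Validate an inbound JSON document and split it into rows destined
    for the normalised tables of the central data store."""
    doc = json.loads(raw)
    missing = REQUIRED - doc.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {
        "accounts": [{"customer_id": doc["customer_id"], **doc["account"]}],
        "transactions": [
            {"customer_id": doc["customer_id"], **t}
            for t in doc["transactions"]
        ],
    }

def replay(ingress_log: list) -> list:
    """Because every inbound document is logged verbatim, a change to the
    internal structure can be followed by replaying the log through the
    updated deconstruction step."""
    return [deconstruct(raw) for raw in ingress_log]
```

The design choice worth noting is that the raw JSON, not the deconstructed rows, is what gets logged: that is what makes replay after a structural change possible.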
At the heart of the data store is a carefully designed, normalised relational database. This gives us strong relationships between data, structural integrity and well-architected data that can feed management and business intelligence (MI and BI) to drive our business and help our customers grow.
Yobota continues to grow, and we aim to combine an absolute commitment to reliability and integrity with clear, easy-to-use interfaces for developers. The approach we have taken with our data systems allows these goals to live in harmony as we continue on the path to making financial services more intelligent, flexible and always about the customer.