Thinking@Scale - Yan Qi

Data System Design - Reliability, Scalability and Maintainability

Designing Data-Intensive Applications - Chapter 1

A successful data system should meet various requirements while solving data problems, both functional and nonfunctional. Functional requirements are often application specific, describing what should be done with the data and how the results are produced. Among the nonfunctional requirements, many factors affect the design and implementation; three are so important that they should be considered throughout the development cycle: reliability, scalability, and maintainability.

Any data system is software developed by humans, deployed and run in an environment composed of hardware. It is important to keep the system working correctly even when faults occur. Problems can be caused by hardware faults (e.g., network interruptions, disk failures, power outages), software issues (e.g., bugs), and human errors (e.g., misconfiguration, operational mistakes). Reliability introduces the concept and offers guidance on exploiting fault-tolerance techniques.

In the real world, a data system grows as its input carries a larger data or traffic volume, and it often becomes more complex. We need precise measurements of load and performance, based on which strategies can be applied to keep performance constant and thereby achieve good scalability.

Additionally, a data system often has a long life cycle, so its maintainability plays a critical role in the course of its evolution. As the book suggests, "good operations can often work around the limitations of bad software, but good software cannot run reliably with bad operations". Engineering and operations teams ought to work together, and sometimes grow with the system.

The book Designing Data-Intensive Applications gives a good discussion and offers guidance on designing data systems with reliability, scalability, and maintainability in mind. Here I present the first chapter, as the start of a long journey.

Career Planning

Long View Approach - Career Planning

In the past 20 years, human life expectancy has improved significantly, and the retirement age has been rising. In other words, retirement starts later but lasts longer. People used to think their careers would be over around their 40s; however, that may not even be the halfway point. In fact, people tend to underestimate the length of a career. It is therefore necessary to plan for a long career journey, especially if a successful career is the goal.

Generally careers can be divided into three stages:

  1. Start strong in the first 15 years of the career;
  2. Reach high in the middle;
  3. Go far near or even beyond retirement.

The book The Long View introduces a set of career mindsets, frameworks, and tools to help us learn how to collect the 'fuel' needed to achieve our career goals at the different stages. After reading it, I made a presentation based on the book; hopefully it highlights the main points.

Clean Architecture

Built with simple rules: water, air, sun, gravity

Software development has many similarities with building construction. There are a few rules that seem simple, like the physics of gravity in the physical world or the single responsibility principle (SRP) in programming. However, not all developers can use them well, especially in complex scenarios. An architect should first have a good understanding of those principles, and grow a pair of sharp eyes to see through the complexity, so that she can apply those rules to achieve a clean architecture.

Uncle Bob, in his book Clean Architecture: A Craftsman's Guide to Software Structure and Design, gives a detailed description of those principles. More importantly, he explains how a clean architecture can be achieved with their help. After reading it, I made a presentation based on the book; hopefully it highlights the main points.

Clean Agile

Teamwork is everywhere, and it is especially important in human society. In a software project, for example, it cannot be emphasized enough whenever more than one person is involved. Many aspects affect the performance of teamwork; the keys are communication and collaboration.

In my not-so-long career as a software engineer, I have found that one of the biggest challenges preventing developers from delivering a successful software product is the communication gap between them and their business partners. Many failures could be avoided if both parties synced up earlier. However, timing is not the only factor. Communication may lead nowhere if a common language is absent. Business people often use a human language, like English, to describe what they need, i.e., the specifications; whereas developers prefer more formal languages, typically thinking of translating the business specifications into code (e.g., acceptance tests). This difference clearly causes a challenge.

Agile tries to address this challenge for a small group of software developers with a feedback-driven approach. A software project is therefore composed of many small cycles, each of which aims to produce a working, deliverable product that the business partners can review, so that both sides can discuss and decide what to do next. Instead of prescribing particular rules or steps, Agile emphasizes a set of principles and values, and encourages cultivating a culture around them. The book by Robert C. Martin, Clean Agile: Back to Basics, gives a very clear explanation of these values and principles. Furthermore, it provides quite a few guides for applying Agile in practice. After reading it, I made a presentation based on the book; hopefully it highlights the main points.

RecordBuffer - A Data Serialization Approach in DataMine

Data serialization is a basic problem that every data system has to deal with. To be efficient, a data serialization approach should arrange the data into a compact binary format that is independent of any particular application. Nowadays there are several open-source data serialization projects, such as Avro, Protocol Buffers, and Thrift. These projects have reasonably large communities and are quite successful in different application scenarios. They are generally applicable across applications, more or less following similar ideas when serializing data and providing APIs for message exchange. These general-purpose approaches usually work well. Additionally, they can work with other data formats, such as Parquet, to provide a variety of options.

However, they could do better when applied to data with nested structure. For example, the in-memory record representation may consume a similar amount of memory even when only a few columns have meaningful values. On the other hand, it is possible to improve deserialization performance with the help of an index.

DataMine, the data warehouse of Turn, exploits a flexible, efficient, and automated mechanism to manage data storage and access. It describes data structures in the DataMine IDL and follows a code-generation approach to define the APIs for data access and schema reading. A data encoding scheme, RecordBuffer, is applied for data serialization and deserialization. RecordBuffer depicts the content of a table record as a byte array, with the following structure.

  • Version No. specifies which version of the schema the record uses; it is required and takes 2 bytes.
  • The number of attributes in the table schema is required and takes 2 bytes.
  • Reference section length is the number of bytes used for the reference section; it is required and takes 2 bytes.
  • Sort-key reference stores the offset of the sort-key column; it is optional and takes 4 bytes when present.
  • The number of collection-type attributes uses 1 byte for the number of collections in the table; it is required.
  • Collection-type field references sequentially store the offsets of the collections in the table; note that the offset of an empty collection is -1.
  • The number of non-collection-type field references uses 1 byte for the number of non-collection-type columns that have the hasRef annotation.
  • Non-collection-type field references sequentially store the (ID, offset) pairs of columns with the hasRef annotation, if any.
  • Bit mask of attributes is a series of bytes indicating the availability of each attribute in the table.
  • Attribute values store the values of available attributes in sequence; note that the sequence must match the order defined in the schema.
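The header and reference section described above can be sketched in a few lines of Python. This is a simplified illustration, not the DataMine implementation: the byte order (big-endian) and the widths of the collection offsets and the (ID, offset) pairs are my assumptions, since the list above only fixes the sizes of the first few fields.

```python
import struct

def encode_header(version, num_attrs, collection_offsets,
                  sort_key_offset=None, non_collection_refs=()):
    """Pack a simplified RecordBuffer-style header into bytes.

    Assumed layout (big-endian):
      2 bytes  version number
      2 bytes  number of attributes in the schema
      2 bytes  reference-section length
      [4 bytes sort-key offset, only when present]
      1 byte   number of collection-type attributes
      4 bytes  per collection offset (-1 marks an empty collection)
      1 byte   number of non-collection-type refs (hasRef columns)
      5 bytes  per (ID, offset) pair: 1-byte ID + 4-byte offset
    """
    ref = bytearray()
    if sort_key_offset is not None:
        ref += struct.pack(">i", sort_key_offset)
    ref += struct.pack(">B", len(collection_offsets))
    for off in collection_offsets:
        ref += struct.pack(">i", off)   # -1 for an empty collection
    ref += struct.pack(">B", len(non_collection_refs))
    for col_id, off in non_collection_refs:
        ref += struct.pack(">Bi", col_id, off)
    # Fixed header: version, attribute count, reference-section length.
    header = struct.pack(">HHH", version, num_attrs, len(ref))
    return header + bytes(ref)

buf = encode_header(version=1, num_attrs=6,
                    collection_offsets=[64, -1],
                    sort_key_offset=16,
                    non_collection_refs=[(3, 40)])
```

The bit mask and attribute values would follow this header in a full record; they are omitted here to keep the focus on the reference section.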

Different from other encoding schemes, RecordBuffer has a reference section that can hold an index or any record-specific information. An index in the reference section can locate a field (like the sort key) directly, simplifying data deserialization significantly. On the other hand, frequently accessed derived values can be stored in the reference section to speed up data analytics. This is quite useful when nested data are allowed. For example, a summary of the nested attribute values can be derived and stored in the reference section, so that deserialization of the nested table (usually very costly) can be avoided when applying aggregation to that attribute.
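To make the benefit concrete, here is a minimal sketch of locating the sort key through the reference section without touching the attribute values. The layout details are assumptions for illustration (big-endian fields; a 6-byte fixed header of version, attribute count, and reference-section length; the optional 4-byte sort-key reference at the start of the reference section), not the actual DataMine wire format.

```python
import struct

def sort_key_offset(record: bytes) -> int:
    """Return the sort-key offset from the reference section,
    skipping the bit mask and attribute values entirely."""
    _version, _num_attrs, ref_len = struct.unpack(">HHH", record[:6])
    if ref_len < 4:
        raise ValueError("record has no sort-key reference")
    # The offset points into the attribute-value section, so the
    # sort key can be decoded directly at that position.
    return struct.unpack(">i", record[6:10])[0]

# A toy record: header claiming a 4-byte reference section,
# followed by a sort-key reference of 42.
record = struct.pack(">HHH", 1, 6, 4) + struct.pack(">i", 42)
```

This is the access pattern the reference section enables: one fixed-size header read plus one offset read replaces a full scan of the serialized record.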