Clean coding is our commitment to creating software solutions that stand the test of time.

Our Engineering


Our software engineers are experts in building modular, scalable, and secure applications. We apply the right combination of architectural strategies, engineering practices, and modern frameworks and libraries to create reliable, flexible software that can meet the evolving needs of your business.

Behavior Driven Development

Collaboration and communication among developers, testers, and stakeholders are key to delivering the right product, so Behavior Driven Development (BDD) is our natural choice for expressing our understanding of requirements. We transform requirements into a collection of executable specifications so that everyone shares a clear understanding of the software's requirements and behavior.

We use Scenarios, written in natural language, to describe behavior from an end user's perspective; each Scenario is backed by a suite of tests. By automating the execution of these tests, we can validate and publish our progress more frequently.

The Scenarios help us obtain feedback from stakeholders and objectively validate our understanding of each use case.

We use Cucumber and PactumJS for the BDD ecosystem in our engagements.

Domain Driven Design

We practice Domain Driven Design and use bounded contexts to establish clear boundaries between different parts of a system.

This approach helps us divide a complex software system into smaller, manageable parts so that we can efficiently discover the entities and business rules within each. We then design these parts to interact with other bounded contexts through well-defined interfaces and message exchanges.

This approach helps us identify potential microservices in a complex system and enables us to deliver understandable, maintainable, and scalable solutions to our customers.

Test Driven Development

We use Test Driven Development (TDD) to ensure that we build the right product and code it correctly. Our guiding mantra is “As your tests get more specific, your code gets more generic.”

We employ the outside-in technique and the separation of concerns principle to build components, using mock objects, stubs, and seams. This lets us test each component in isolation and defer building dependencies until more information is available.

We also use this approach to build a safety net when refactoring and transforming legacy code.

Headless Components

We focus on delivering data or services through an application programming interface (API) to be consumed by other applications or systems.

The success of an API often depends on how well it meets the needs of its consumers, and how easy it is to integrate and use.

We adopt an API-first approach to collaboratively prioritize the design and development of Application Programming Interfaces (APIs) before other components of the system.

This approach helps us ensure that the design promotes reuse across contexts, supports multiple clients, including web and mobile applications, and offers choices for integration with peer systems.

Additionally, this approach aligns naturally with our project management methodology, in which we continuously refine the behavior of the API based on user feedback and changing business needs.

We specialize in designing GraphQL APIs and well-documented REST APIs that comply with the OpenAPI Specification.

Data Modeling and Migration

In software development, decisions related to database design can have long-lasting impacts on the system and can be difficult to change later on.

The performance of an application is heavily influenced by the design of its database schema. It's well-known that input/output (I/O) is often the primary bottleneck in software engineering.

By following the principles of domain-driven design and bounded contexts, we gain a deeper understanding of the data, including types, relationships, access patterns, constraints, and business rules.

However, gaining a complete understanding is a progressive process. To avoid being trapped by a bad early design, we defer certain decisions and decompose applications into smaller services.

We have learned important lessons over decades of experience, and here are some of our beliefs:

  • We embrace polyglot databases to leverage the benefits of both the relational and NoSQL worlds.
  • We prioritize performance from day one and favor denormalization when appropriate. Although normalization is generally good practice, denormalization can improve performance in certain cases. We permit data duplication to avoid accessing multiple tables or documents, and we avoid joins or lookups as much as possible.
  • We carefully choose data types for each field to ensure data accuracy, storage efficiency, and performance. For example, using integer data types for numeric data can improve performance and storage efficiency. Changing them in production as the system evolves can be a nightmare.
  • We avoid stored procedures as much as possible, as they pose challenges to testability and maintainability.
  • We use indexes wisely, creating them only on frequently queried fields to improve query performance.
  • We plan for scalability, designing the schema and queries for growth and increased data volume. This includes partitioning data, sharding, and using caching to reduce database load. For example, range partitioning proves very efficient when dealing with historical data.
  • Finally, documenting the data model and schema is vital for understanding and maintaining the database over time. We document field definitions, relationships, constraints, and the key design decisions and tradeoffs made during data modeling.

We leverage open-source migration tools like Flyway and Diesel to manage relational database schema changes across different environments.

We utilize entity frameworks and object-relational mapping (ORM) technologies such as Diesel for Rust, Hibernate for Java, and TypeORM for Node.js. By capturing schema changes as versioned migrations, we establish continuous integration and delivery pipelines.

Designing for Failure

We design components to handle and recover from failures as quickly as possible. We ensure fault resilience by following redundancy and isolation principles.

We use several tools and techniques to ensure fault resilience, including circuit breakers, load balancers, health checks, and failover mechanisms.

To minimize the impact of errors or failures during evolution, we adopt a canary release strategy for API deployment.


Let’s work together.