Full Stack Hiring – Apply Now

Update May 2021 – we hired 5+ developers in early 2021, and are now looking to hire 5+ more.

Oasis Digital is growing – maybe you should be on the team! We are again looking for more great developers. All the usual information is available on our careers page.

All the info is there of course, including a video with lots more about working here. Here are the key aspects:

  • The core of our team works at our headquarters in St. Louis MO – at least during non-pandemic times. But more of us also work around the USA (and a few around the world) on our customer projects.
  • We mostly hire full-time employees, with contractors for certain circumstances.
  • We hire at a wide range of levels: from entry-level software developers eager to grow fast, through grizzled gurus who already have the deep understanding and experience our most critical customer work requires.
  • No “take-home interview coding test”; rather we will program with you at the relevant stage of the hiring process to get to know each other, and see your abilities in action.
  • The most common key skill for new hires is Angular, React, or one of the other competing frameworks.
  • “Full stack” is best, though many new hires start in one layer of the stack (for example, in the browser, on the server) and learn the rest over time.
  • Please apply via our site linked above – that is the machinery we use; resumes that arrive any other way tend to get lost.
  • Our hiring process is about the same as in this 2018 post.
  • Feel free to reach out by email (address at the lower right of our web site) and ask questions about the job, to figure out if you may want to apply.

Also, a bit about our philosophy of hiring:

  • Hiring is a marketing-like process, not a sales-like process. We aim to get the word out so that the right people find us.
  • Hiring is a collaborative process of looking for the best match, not a race or competition.
  • We hire people whose time is valuable, so we optimize for rapid assessment and understanding.

Blog posts can get stale. Our current hiring is always listed on our careers page!

NgRx is 40x faster than your code – find out why

When I started using @ngrx/store to hold collections of information, I usually put the data into the store as a JavaScript array. It seemed to be the simplest and most appropriate data structure for the information. However, when @ngrx/entity came out, I saw that it used a different pattern – instead of using the array directly, it converts the array to two data structures: an array of ids and an object map keyed by those ids. Why did they do this? And is there a lesson we can learn for our own code?
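
To make the shape difference concrete, here is a minimal sketch (using a hypothetical Order type, not the example from the full article):

interface Order {
  id: string;
  customer: string;
}

// Plain array in the store:
const ordersArray: Order[] = [
  { id: 'a1', customer: 'Acme' },
  { id: 'b2', customer: 'Globex' },
];

// The @ngrx/entity-style shape: an array of ids plus an object map keyed by those ids.
interface OrderEntityState {
  ids: string[];
  entities: { [id: string]: Order };
}

const ordersEntityState: OrderEntityState = {
  ids: ['a1', 'b2'],
  entities: {
    a1: { id: 'a1', customer: 'Acme' },
    b2: { id: 'b2', customer: 'Globex' },
  },
};

// Looking up by id becomes a constant-time property access rather than an array scan:
const order = ordersEntityState.entities['b2'];

The same data as a plain array and as the ids/entities pair used by @ngrx/entity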

Continue reading NgRx is 40x faster than your code – find out why

Advanced, Angular-related training and mentoring

About five years ago (it feels like forever) Oasis Digital started training on Angular. Our flagship course Angular Boot Camp has become quite popular; we’ve taught it many hundreds of times to many thousands of students. For the first few years, this offering was a perfect fit for almost every company that contacted us, as software teams were initially adopting Angular. Over the last few years though, Angular has become mature and robust and has achieved broad adoption across organizations large and small. The aggregate needs of Angular teams have inevitably shifted toward bigger-scale, more difficult and important uses of the technology.

As a result, our training efforts have substantially pivoted toward more advanced topics.

Continue reading Advanced, Angular-related training and mentoring

Thinking hard about a project launch

Here at Oasis Digital, we are always agile, and depending on the project needs, sometimes use Agile (in the “capital A” sense) processes extensively. Yet regardless of agility, iterations, steering, and so on, the planning and decisions made at the beginning (and in the early months) of a project often have profound, very difficult-to-change consequences later.

Therefore, while we would never argue for the straw-man Waterfall, we aim to think very hard about a project at the beginning.

Continue reading Thinking hard about a project launch

Querying without OR in Firestore

Background

Here at Oasis Digital we have successfully used the Firebase Realtime Database, and more recently the (beta as of July 2018) Firebase Firestore. These similarly branded offerings have important feature differences, and the latter appears likely to be the recommended choice in the future.

Firestore is a globally scalable, fully managed, document oriented NoSQL database. It is suitable for a very small team to build an application which could then scale to a vast user base with very little system administration work; of course, there are feature trade-offs which enable these amazing properties. Notably, Firestore has important structural limits on the types of queries that can be performed. For example, it has no joins, no “OR” criteria, and limited range (inequality) queries. As I understand, these limitations are what make it possible to engineer Firestore operations to “cost” (and therefore be priced!) in proportion to the amount of data returned.

We recently implemented a Firestore application (with Angular, Angularfire2, and Firebase Functions) in which the query limitations initially were an obstacle; but we found solutions capable of producing great results nonetheless.

Querying workflow state

There are countless scenarios for querying a data store, of course. A simple, common such scenario is an application in which each “document” represents an entity that moves through a workflow over its lifespan. The problem domain doesn’t matter; but one common concrete example is order workflow in an e-commerce system. Such a workflow could look something like this:

New -> Verified -> Scheduled -> In progress -> Preparing -> Shipped -> Delivered -> Closed

Or more generally, think of a workflow feature as the movement of an entity through a series of states:

S1 -> S2 -> S3 -> S4

(In both examples I have drawn simple, linear flows – most of our production/customer software has workflows with looping, branching, and other complex considerations.)

In a system with workflow features, it is very common to need to query a group of entities which are in a certain state – and also very common to query the entities within a set of states. For example, a feature might tally or otherwise view “all orders that have not yet shipped”, which comprises 5 states in the example workflow above.

Firestore “schema” design

Given the absence of OR queries in Firestore, and the frequent need to query entities that are in one state OR another, how should we represent workflow state in a Firestore implementation?

With a traditional, relational database, the main driver of data modeling is to concisely represent the underlying data and ideally “make illegal states unrepresentable”. A representation oriented for this kind of database should follow one of the normal forms, mathematically justified decades ago.

With Firestore (as with most NoSQL data stores), data modeling has a different main driver. An application instead stores data in a way to enable whatever kinds of queries are needed, given the query capabilities. This potentially implies a significantly different data layout, although we often start with something similar to a traditional RDBMS schema then diverge as needed.

Keeping that in mind, how should we store “what state is this entity in?” in a document in Firestore?

Approach 1: Single state field

The simplest solution is one field to represent the state of the entity represented by a Firestore document. The data storage looks something like this:

state: 'Scheduled'

Querying entities in a single state is trivial with such representation:

.where('state', '==', 'Scheduled')

Querying entities in several states requires running a separate query per state, then combining the results together in client code.

Unfortunately, this “combine results in client code” approach, though mentioned in the documentation, has unpleasant consequences. Consider a case where there are 500 entities in state S1, and 500 more entities in state S2, along with some other field (perhaps “due date”) that ranks all of the entities. Then try to write a query (or pair of queries) to retrieve the 500 “soonest” entities that are in either of these two states:

.where('state', '==', 'S1').orderBy('dueDate').limit(500);

.where('state', '==', 'S2').orderBy('dueDate').limit(500);

(Your client code would combine the results, sort by due date, and discard all but the first 500 of the combined list.)

Unfortunately, this query now potentially “costs” twice as much, in both time and money, as it should; it may query up to 1000 documents only to discard 500 of them. Still, with this extra cost, time, and client-side query implementation, the single state field approach does work.
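
For illustration, a rough sketch of that client-side combination, assuming the Firebase web SDK and an orders collection whose dueDate field is a numeric timestamp:

import * as firebase from 'firebase/app';
import 'firebase/firestore';

// Query each state separately, then merge, sort, and trim in client code.
async function soonest500InS1orS2(db: firebase.firestore.Firestore) {
  const orders = db.collection('orders');

  const [s1Snap, s2Snap] = await Promise.all([
    orders.where('state', '==', 'S1').orderBy('dueDate').limit(500).get(),
    orders.where('state', '==', 'S2').orderBy('dueDate').limit(500).get(),
  ]);

  // Up to 1000 documents may have been read, even though only 500 are kept.
  return [...s1Snap.docs, ...s2Snap.docs]
    .map(doc => ({ id: doc.id, ...doc.data() }))
    .sort((a: any, b: any) => a.dueDate - b.dueDate)
    .slice(0, 500);
}

Sketch of combining per-state query results in client code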

Approach 2: One field per state

With this next approach, entity state is represented by a set of flags, one for each state. Typical data could look something like this:

state: {
    New: false
    Verified: false
    Scheduled: true
    InProgress: false
    Preparing: false
    Shipped: false
    Delivered: false
    Closed: false
}

This is more verbose, but easy to understand and implement. Querying for a single state is as easy as before:

.where('state.Scheduled', '==', true)

At first glance, the limitation on OR queries appears to stymie a multiple-state query with this schema. However, with a bit of Boolean logic these OR operations can be swapped out for ANDs and NOTs in the right combination: transform the desired OR of all the states you want to include into a query which instead excludes (ANDs a NOT for) each of the states you do not want.

Continuing with the order-management example, imagine we want all of the orders in the first three states. The query looks something like this:

.where('state.InProgress', '==', false)

.where('state.Preparing', '==', false)

.where('state.Shipped', '==', false)

.where('state.Delivered', '==', false)

.where('state.Closed', '==', false)

This is simple to implement, mechanically. A bit of utility code could perform the correct set of WHERE operations, leaving application code straightforward.
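
As a sketch of what that utility code might look like (hypothetical helper; it assumes the complete list of state names is known and uses the Firebase web SDK’s chainable Query API):

import * as firebase from 'firebase/app';
import 'firebase/firestore';

const ALL_STATES = [
  'New', 'Verified', 'Scheduled', 'InProgress',
  'Preparing', 'Shipped', 'Delivered', 'Closed',
];

// Express "state is any of includedStates" by ANDing a NOT for every other state.
function whereStateIn(
  query: firebase.firestore.Query,
  includedStates: string[],
): firebase.firestore.Query {
  return ALL_STATES
    .filter(state => !includedStates.includes(state))
    .reduce((q, state) => q.where(`state.${state}`, '==', false), query);
}

// Usage: all orders in the first three states.
// const query = whereStateIn(db.collection('orders'), ['New', 'Verified', 'Scheduled']);

Sketch of utility code generating the ANDed NOT clauses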

How might this scale? This is an unknown – that is a lot of ANDs (especially for an entity with many states), which might need more Firestore indexes, might stress Firestore’s query mechanism in unexpected ways, or might hit a limit on the number of allowable (or advisable) ANDs.

This is probably the best approach for a small number of states.

Approach 3: Combinatorial state fields

Given how well mathematical logic worked in the previous approach, what if we took it further? When writing an updated state of an entity/document, application code (utility code) could emit all of the combinations of states that include the current state. Concretely, consider the S1/S2/S3/S4 example. If an entity is in state S2, that could be represented like so:

state: {
    S1: false, // or omit the 'false' entries
    S2: true,
    S3: false,
    S4: false,
    S1_S2: true,
    S1_S3: false,
    S1_S4: false,
    S2_S3: true,
    S2_S4: true,
    S3_S4: false,
    S1_S2_S3: true,
    S1_S2_S4: true,
    S1_S3_S4: false,
    S2_S3_S4: false,
    // S1_S2_S3_S4 not necessary, would always be true
}

This approach pre-computes the answers to all possible queries of sets of states. Such a representation could be generated easily and consistently by utility code. Queries, again with the bit of utility code to generate them, can find all documents (entities) in any set of states with a single WHERE. For example, to look for all documents that are in state S2 or S3:

.where('state.S2_S3', '==', true)

This will have efficient query characteristics, but could run into limitations around the number of allowable fields in a single Firestore document. Also, with Firestore there is an index for each of these fields, and each index increases the Firestore storage costs and makes document updates a bit slower.
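
A sketch of utility code that could pre-compute these combination flags when writing a document (illustrative only; the state names follow the example above):

const STATES = ['S1', 'S2', 'S3', 'S4'];

// For the current state, emit a flag for every non-empty subset of states
// (except the full set), true when the subset contains the current state.
function combinationFlags(currentState: string): { [key: string]: boolean } {
  const flags: { [key: string]: boolean } = {};
  const subsetCount = 1 << STATES.length;
  for (let mask = 1; mask < subsetCount - 1; mask++) {
    const subset = STATES.filter((_, i) => mask & (1 << i));
    flags[subset.join('_')] = subset.includes(currentState);
  }
  return flags;
}

// combinationFlags('S2') yields { S2: true, S1_S2: true, S2_S3: true, S3_S4: false, ... }

Sketch of utility code pre-computing the combination flags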

Approach 4: Partial combinatorial state fields

This approach is like the previous approach, with one optimization: rather than pre-generating all of the possible combinations, generate only those combinations which enable queries the application actually performs. Typically an application will only ever query for a few handfuls of combinations of states – vastly fewer than the total number of combinations, which (as you may recall from math courses) grows combinatorially.

The obvious downside: if an application later needs to query a different combination of states, it must first update every document to add the new bit of data, or fall back to another approach.

Approach 5: State range with a numeric field

Up to this point, every approach would work with any kind of workflow, regardless of its linearity. But many applications have a primarily or entirely linear workflow – and that can be used to ease the query challenge. Consider a simple numerical indicator of the state:

state: 3 // entity is in state S3

This enables state “range” queries, like so:

.where('state', '<=', 2) // Entities in state S1 or S2

.where('state', '>=', 3) // Entities in state S3 or S4.

It’s also possible to query any contiguous range of states:

.where('state', '>=', 2)

.where('state', '<=', 3)

Unfortunately, this approach has a serious downside: a Firestore query can only do range queries on a single field. Therefore with this approach, it is impossible to query questions like “entities in state S2 or S3, ordered by due date, first 50”, because the single range query “slot” is used up by the state part of the query. Because of this obstacle, it’s hard to recommend this approach.

Approach 6: State range with boolean fields

Fortunately, there is another variation of the range idea that is quite workable. Use a flag for each state, marked as true if the entity is at or after that state. So for example, an entity at state S3 would be like so:

state: {
    S1: true,
    S2: true,
    S3: true,
    S4: false,
}

Each flag represents a range of states; as before, utility code could implement this representation generically and reliably. Conceptually, a single WHERE can find all entities before or after a point in the workflow. For example, these queries split the state space before/after the S2/S3 boundary:

.where('state.S3', '==', false) // Entities in state S1 or S2

.where('state.S3', '==', true) // Entities in state S3 or S4.

A pair of WHERE clauses (implicitly ANDed) can find all the entities in any contiguous range of states. For example, to find all entities in S2 or S3:

.where('state.S2', '==', true)

.where('state.S4', '==', false)

This approach avoids a combinatorial explosion in the number of state fields, avoids major query complexity, and keeps the inequality-query “slot” available for other uses; it is therefore likely an excellent choice if states are mostly linear.
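
As a sketch of how utility code might implement this representation and its range queries (illustrative helpers, assuming the ordered list of state names is known and the Firebase web SDK is in use):

import * as firebase from 'firebase/app';
import 'firebase/firestore';

const ORDERED_STATES = ['S1', 'S2', 'S3', 'S4'];

// When writing a document: each state flag is true if the entity is at or past that state.
function reachedFlags(currentState: string): { [key: string]: boolean } {
  const currentIndex = ORDERED_STATES.indexOf(currentState);
  const flags: { [key: string]: boolean } = {};
  ORDERED_STATES.forEach((state, i) => {
    flags[state] = i <= currentIndex;
  });
  return flags;
}

// When querying: a contiguous range of states becomes at most two equality WHEREs.
function whereStateBetween(
  query: firebase.firestore.Query,
  from: string,
  to: string,
): firebase.firestore.Query {
  let q = query.where(`state.${from}`, '==', true);
  const stateAfterTo = ORDERED_STATES[ORDERED_STATES.indexOf(to) + 1];
  if (stateAfterTo) {
    q = q.where(`state.${stateAfterTo}`, '==', false);
  }
  return q;
}

// Example: whereStateBetween(q, 'S2', 'S3') => state.S2 == true AND state.S4 == false

Sketch of utility code for the boolean range representation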

Conclusions and the future

For an application being implemented today on Firestore, which has workflow scenarios where the lack of OR queries is an obstacle, likely one of the above approaches will do a sufficient job. Looking forward to the future, it seems likely the Firestore team will find ways to expand the query capabilities without loss of performance – perhaps rendering these approaches obsolete.

 

Printable reports in a Node application

Imagine your shiny new web application, JavaScript from end to end (perhaps Node plus Angular/React/Vue/etc.), offers a great set of features and a highly interactive user interface. Then a key decision-maker wanders by to praise the interactive features and ask where to click to obtain detailed printable reports like those generated by all the predecessor systems for the last few decades. Uh oh.

It turns out that reports still matter; sometimes they land on paper, other times just as PDF files easily passed around without access to the original application. Here are our thoughts on how to effectively generate reports from a Node application. There are many options, but these are the ones we most commonly see and use.

1: Print the relevant application page

By far the simplest way to get a “report” from a web application is to use the browser’s built-in print capability. To “print a report”, the user navigates to where they see the data they wish to print, then they choose print in the browser. That’s it.

Many pages yield a somewhat poor report by default, but a CSS media-query print stylesheet can rearrange things enough to produce passable results for simple cases. We recommend setting up such print stylesheets, and trying to print pages that have a report-like nature, even if another reporting technique is also used – the ability to print a web page has been in browsers since nearly the beginning, and offers a low-cost way to get more value from the same application.

2: Headless-browser-based reporting

Printing an application page in the browser means being limited to whatever HTML is relevant to display in the browser, and further being subject to the vagaries of different browsers. Instead, it’s possible to reuse the same tools (HTML, templating, CSS) on the server to generate specific content for report printing.

To do this, choose any of the numerous Node HTML templating systems, perform data access however you do so for application features, gather up the data for the report, and emit the HTML/CSS. Then use a headless browser (on the server) to transform that HTML and CSS to a PDF, and make it available for the user to download.
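
As a rough sketch of the server-side rendering step, assuming Puppeteer as the headless browser and that the report HTML has already been produced by whatever templating you chose:

import * as puppeteer from 'puppeteer';

// Render already-templated report HTML to a PDF buffer, ready to send as a download.
async function htmlToPdf(reportHtml: string): Promise<Buffer> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.setContent(reportHtml);
    return await page.pdf({ format: 'A4', printBackground: true });
  } finally {
    await browser.close();
  }
}

Sketch of HTML-to-PDF rendering with a headless browser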

This has some compelling advantages:

  • Familiar programming model – as a developer you use approximately the same tools for report output that you use for screen output
  • As a result, it’s easy to get started with relatively little to learn
  • Well suited to reports that generally feel like “documents”, such as dunning letter reports.

Disadvantages:

  • HTML, and therefore this approach, have little in the way of traditional reporting software features
  • Layout tooling for HTML is screen oriented, not report oriented

Implementations include:

3: Reporting library / API

There are Node libraries which offer an API generally suitable to create reports. Unlike the HTML approach, a reporting centric API will have features directly suited for report output, such as creating tabular data output aligned by decimal point, looping over data to put rows in such a table, and so on. This typically will be considerably more concise than HTML approach, because the API will be closer to the problem domain (the domain of “make a report”).

A notable disadvantage here is that the vocabulary of reports such an API can produce tends to be much more limited. This is inherent in abstraction: the higher-level an API, the easier it is to produce results, but the more constrained the results.
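
For flavor only, here is a hypothetical sketch (a made-up API, not any particular library) of the shape a reporting-centric API might take:

// Hypothetical reporting API, for illustration only.
interface ReportBuilder {
  column(title: string, opts?: { width?: number; align?: 'left' | 'right' | 'decimal' }): ReportBuilder;
  row(...cells: Array<string | number>): ReportBuilder;
  totalRow(label: string): ReportBuilder;
  render(): Promise<Buffer>; // produce the finished PDF
}

// Usage would read close to the problem domain, e.g.:
//   report.column('Order #').column('Amount', { align: 'decimal' })
//         .row('1001', 19.95).totalRow('Total');

A hypothetical sketch of a report-oriented API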

Implementations include:

4: Low-level PDF drawing API

Rather than a high-level API, there are also low-level APIs available to programmatically create PDF files. A low-level API will have operations like “place text on the page with the following formatting” and “draw a line from coordinate to coordinate”. This low-level, full control means that any report or other output can be produced, but the coding effort to do so can be significant.

This approach typically makes sense only for cases with very specific reporting needs. It is too labor-intensive to create numerous reports this way.
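
A small sketch of what low-level drawing code can look like, assuming PDFKit as one example of such a library:

import * as fs from 'fs';
const PDFDocument = require('pdfkit'); // low-level PDF drawing library (one of several choices)

// Every piece of text and every line is placed at explicit coordinates.
function writeSamplePdf(path: string) {
  const doc = new PDFDocument({ size: 'LETTER', margin: 50 });
  doc.pipe(fs.createWriteStream(path));

  doc.fontSize(16).text('Quarterly Report', 50, 50);
  doc.moveTo(50, 80).lineTo(550, 80).stroke(); // a horizontal rule under the title
  doc.fontSize(10).text('Region: Midwest', 50, 100);

  doc.end();
}

Sketch of low-level PDF drawing code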

Implementations include:

5: Call a traditional report tool

Lastly, reporting can be thought of as a separate subsystem, whose implementation need not be bound into the same platform as the rest of a system. With this approach, reporting functionality is generally omitted from the application backend, and instead implemented using an off-the-shelf report tool. There is a busy “enterprise” reporting tool market, with multiple very mature products. Costs vary widely, but these tools provide the kind of reporting experience developers may remember from years past: a visual report design surface, a way of interactively running and tweaking and drilling into a report, and so on.

Advantages:

  • Extensive layout possibilities
  • Visual layout tools, rather than code only
  • Report design can often be done by non-developers
  • The reporting solution may offer tools for managing and customizing a large library of reports, out-of-the-box

Disadvantages:

  • Separate subsystem, different tool for staff to learn
  • Deployment complexity, of deploying an additional product rather than only adding a library
  • Cost: some of these are very “enterprise” products, with prices to match

Implementations include:

Choosing an approach

The general trade-off among these approaches is that both cost and capability increase in roughly the order presented above. A CSS print-specific stylesheet might yield acceptable results in a few minutes; a separate reporting subsystem, with a new tool stack to learn, could involve a team laboring for months. Oasis Digital has used each approach above (except the headless browser approach, as headless browsers only recently became popular) with excellent results.

Loopback 3, TypeScript, and Custom Connectors

Loopback is a powerful Node.js API framework built on top of Express that comes with a lot of functionality in-the-box. Recently, I gave a talk about creating APIs with Loopback in the context of building Angular web apps. In that talk I created a vanilla Loopback API using the Loopback CLI and connected the resulting API to the Angular Tour of Heroes demo application. While the CLI allows for easy configuration of Loopback’s JSON files via simple command-line operations, there are times when you need to write code to expand the functionality of your API, especially when the backing storage is not an off-the-shelf database but a custom enterprise API. In cases like this, the best solution is often to write your own connector. Also, many developers would like to use TypeScript with Loopback. While Loopback 4 will use TypeScript by default, version 4 has not yet been released. Although Loopback 3 does not use TypeScript, any Loopback 3 project can be converted to use TypeScript today. In this article I will explain how to convert any Loopback 3 project to TypeScript and also how you can expand your API’s capabilities by creating your own connector.

Background and Motivation

TypeScript

Loopback projects are configured with JSON files and coded in JavaScript by default. While motivating TypeScript over JavaScript is beyond the scope of this post, there are many persuasive arguments for using types as a first line of defense against bugs. As a first-order approximation, TypeScript is merely typed JavaScript, and TypeScript readily transpiles into JavaScript. Thus, as we will see, TypeScript can afford type safety in any Loopback project with minimal headache.

Architecture of Loopback

Briefly, Loopback represents groups of data abstractly as models that interact with backing storage via connectors. Incidentally, a configured instance of a connector is called a data source. In Loopback, a model represents the schema of one instance of a certain kind of data, and a connector enables any number of models to interact with backing storage. There are many connectors available, such as MongoDB, MySQL, and PostgreSQL, which are installed as NPM packages.

The idea behind this separation of concerns is that you can describe the shapes of – and relationships between – your data separately from describing how to retrieve or update that data. For instance, if one were building an API to represent a hospital, a “physician” model could be created that contains properties such as specialty, years practiced, and gender, and a separate “patient” model containing, for example, properties for a patient’s age, gender, address, and phone number. Then, each model could be connected to backing storage via a connector.

Furthermore, the connector does not have to be the same for each model: physicians could be stored on a MySQL database and patients stored on a MongoDB instance, for example. The relationship between each patient and a physician can then be handled fully inside Loopback by constructing what are called relations.

For more information about the architecture and typical usage of Loopback, see my Loopback talk.

Loopback Connectors

Since Loopback has many connectors available as npm packages for different kinds of storage, the model-connector architecture works very well when the backing storage is an off-the-shelf instance of, for example, Postgres, MongoDB, or even Elasticsearch. However, when your model must interact with a custom API, you are largely left with the following three options:

  1. Use the Loopback REST connector
  2. Write custom code directly inside the model
  3. Create a custom connector

The first option only works if the API is sufficiently RESTful, and the second results in code that is not shared between models. Thus, the best way to enable your Loopback API to interact with non-RESTful APIs is often to write your own connector.

When Should TypeScript be Compiled?

When converting a JavaScript project to TypeScript, one can generally choose to either run the project in ts-node – the TypeScript version of Node – or compile the project with the TypeScript compiler and run the resulting JavaScript output with the standard version of Node. Although the ts-node option avoids the need to explicitly compile each time the source code is changed during development, it also implies using ts-node in production, which we generally avoid in favor of Node itself.

Thus, I will assume that our goal is to compile from TypeScript source rather than running the TypeScript project in-place. The end result will be a server directory containing TypeScript source files and JSON configuration files, and a build directory that contains the compiled JavaScript files along with the same JSON configuration files. To do this, we will use the TypeScript compiler and an npm CLI utility to copy Loopback’s configuration files to the build directory.

Unfortunately the Loopback CLI will not work on the TypeScript project. However, the CLI can still be useful by performing actions on a test Loopback project, checking how the changes affected the JSON configuration files, and performing the same changes on the TypeScript project by hand. We have found that, after using Loopback enough, it can be faster to perform actions, such as creating models, by hand rather than using the CLI.

Converting a Loopback Project to TypeScript

When creating a new project with the Loopback CLI, a JavaScript project is created by default. These steps assume the project is a fresh CLI-generated project; however, the general approach applies to any Loopback project.

To convert a Loopback CLI-generated project to TypeScript, we can take the following steps:

1. Create a ‘build’ directory in the Loopback project’s root level for the output JS and JSON files
2. Run ‘npm i --save-dev typescript’ in the project to install TypeScript as a development dependency
3. Create a ‘tsconfig.json’ file in the root level with “outDir” set to “build/server” and “include” containing an entry “server/**/*.ts.” An example ‘tsconfig.json’ file:

{
"compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "outDir": "build/server",
    "sourceMap": true,
    "noImplicitAny": true
  },
"include": [
    "server/**/*.ts"
  ]
}

Example tsconfig.json file

4. Run ‘npm i --save-dev @types/node’ to install the TypeScript types for Node.js
5. Rename all .js files to .ts and fill in types. As a tip, setting “module.exports = value” in TypeScript can be achieved with “export = value”.

export = function enableAuthentication(server: any) {
  // enable authentication
  server.enableAuth();
};

Example of converting a Loopback source file to TypeScript

6. Run ‘npm i --save-dev copyfiles’ which installs an npm CLI utility that copies files
7. In ‘package.json’:

  1. Edit “main” to point to “build/server/server.js”
  2. Add a “compile” script that performs “tsc && copyfiles \"server/**/*.json\" build/server -u 1” to copy the JSON configuration files and preserve the directory structure

8. Run ‘npm start’ to start your API server!

{
  "name": "loopback_ts",
  "version": "1.0.0"
  "main": "build/server/server.js"
  "engines": {
    "node": ">=4"
  },
  "scripts": {
    "lint": "eslint .",
    "start": "node .",
    "compile": "tsc && copyfiles \"server/**/*.json\" build/server -u 1",
    "posttest": "npm run lint && nsp check"
  },
...

The resulting package.json file after starting with a blank CLI project

build
  server
    boot
    component-config.json
    config.json
    datasources.json
    middleware.development.json
    middleware.json
    model-config.json
    server.js
    server.js.map
node_modules
server
...

The resulting directory structure after converting to TypeScript

Now the project has been converted to TypeScript! Next steps include:

  1. Configuring linting
  2. Adding a “clean” script to clean the build directory using, for example, rimraf or rimraf-standalone for “rm -rf” cross-platform compatibility, and
  3. Setting up a directory for the client application to live in

These are left as an exercise for the reader or a follow-up article.

Writing and Using a Custom Connector

Using an existing connector generally involves installing the connector with “npm install,” adding the connector as a data source (either by hand in the JSON or via the Loopback CLI), and using the data source with a model (again, either by hand or via the CLI). This works because Loopback looks for connectors in the node_modules directory, where npm packages are installed. Thus, there are generally two ways to incorporate a custom connector into a Loopback project: publish your connector with the prefix “loopback-connector-” in the name as a repository, for example on GitHub, and install it with “npm install,” or place the code inside your Loopback project and use a JavaScript hook to instantiate the connector as a datasource in code. Here we describe the latter option since in general we would not like to have to publish every custom connector that we write.

The following two code snippets show the boilerplate TypeScript code required to create a new, custom connector and connect it to a model. In the connector code, when Loopback creates a new Data Source from a connector, it calls the connector’s exported “initialize” function, passing a Data Source object and a callback function. The initialize function creates a new instance of the connector and initializes pointers in the Data Source and connector objects to point to each other. The constructor of the connector initializes any properties of the Data Source that were passed as properties when the Data Source was created.

export class MyConnector {
  dataSource: any;
  propertyName: string;

  constructor(settings: any) {
    // Initialize properties here:
    this.propertyName = settings.properties.propertyName;
  }

  // Implement connector methods here (see Table 1)
}

export function initialize(dataSource: any, callback: Function) {
  const connector = new MyConnector(dataSource.settings);

  dataSource.connector = connector;
  connector.dataSource = dataSource;

  callback();
}

Boilerplate code for a custom connector

import * as loopback from 'loopback';

import * as MyConnector from 'path/to/the/connector';

const myDataSource = (loopback as any).createDataSource('dataSourceName', {
  connector: MyConnector,
  properties: {
    propertyName: 'Hello, World!'
  }
});

export = function (myModel: any) {

  // Connect model to data source
  myModel.attachTo(myDataSource);

};

How to use the JavaScript “createDataSource” hook to connect a custom connector to a PersistedModel

Supporting PersistedModel in the Connector

After creating the boilerplate code, the logic of the connector must be implemented as methods in the connector’s class. Since the most common use case of Loopback models is the PersistedModel, which generally represents any model that is persisted in a backing data storage, we focus on using custom connectors with a model that declares PersistedModel as its base class.

As the Loopback documentation explains, the PersistedModel is the base class for most built-in models, and the vast majority of Loopback model use-cases rely on the PersistedModel as a base class. The PersistedModel provides standard create, read, update, and delete (CRUD) operations and exposes REST endpoints for them. Since we are creating a custom connector, the connector must implement methods that the PersistedModel’s CRUD operations use.

After the Data Source is attached to the PersistedModel, specific methods in the connector are called to create, retrieve, update, or delete data based on the source PersistedModel endpoint. Table 1 shows which connector methods are called for which PersistedModel endpoints. As we can see, only a few connector methods support a wide variety of endpoints.

PersistedModel Endpoints → Connector Method(s) Called

PATCH /modelName, PUT /modelName, POST /modelName, POST /modelName/replaceOrCreate → create
GET /modelName, PATCH /modelName/{id}, GET /modelName/{id}, GET /modelName/findOne → all
HEAD /modelName/{id}, GET /modelName/{id}/exists, GET /modelName/count → count
PUT /modelName/{id}, POST /modelName/{id}/replace → replaceById
DELETE /modelName/{id} → destroyAll
POST /modelName/update → update
POST /modelName/upsertWithWhere → all, create

Table 1: Connector methods that must be implemented to support the given PersistedModel endpoints

To implement the connector methods, the parameters of these methods must be discovered. This is left as an exercise for the reader; however, an easy way is to declare several parameters, log them to the console, call the associated endpoint(s), and observe the console output. In general, data is passed first, followed by an authorization object and a callback function.
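
For example, a set of temporary logging stubs along these lines (a sketch only; replace the any-typed rest parameters with real, typed parameters once the console output reveals them):

// Temporary, exploratory connector methods: log the arguments, then hand the
// callback an empty result so the endpoint still responds while exploring.
export class MyConnectorStub {
  dataSource: any;

  all(...args: any[]) {
    console.log('all called with:', args);
    const callback = args[args.length - 1];
    callback(null, []);
  }

  create(...args: any[]) {
    console.log('create called with:', args);
    const callback = args[args.length - 1];
    callback(null, {});
  }

  count(...args: any[]) {
    console.log('count called with:', args);
    const callback = args[args.length - 1];
    callback(null, 0);
  }
}

Sketch of logging stubs used to discover connector method parameters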

Disabling Remote Methods

Finally, if any PersistedModel endpoints are not needed, they can be disabled using “disableRemoteMethodByName” as shown in the code snippet below. This particular snippet disables all but the immutable endpoints of a PersistedModel. The only caveat to using this JavaScript hook is that any endpoints that are not static methods of the model’s class belong to the model’s prototype and must be declared as such with “prototype.methodName”.

export = function (myModel: any) {

  // Connect model to data source
  myModel.attachTo(myDataSource);

  // Disable mutable and unimplemented endpoints
  myModel.disableRemoteMethodByName('createChangeStream');
  myModel.disableRemoteMethodByName('upsert');
  myModel.disableRemoteMethodByName('updateAll');
  myModel.disableRemoteMethodByName('upsertWithWhere');
  myModel.disableRemoteMethodByName('create');
  myModel.disableRemoteMethodByName('replaceOrCreate');
  myModel.disableRemoteMethodByName('replaceById');
  myModel.disableRemoteMethodByName('deleteById');
  myModel.disableRemoteMethodByName('count');
  myModel.disableRemoteMethodByName('prototype.updateAttributes');

};

Disabling some of the endpoints that come with a PersistedModel in-the-box. This snippet disables all but the immutable endpoints.

Conclusion

Although Loopback 4 will use TypeScript natively, it has not been released at the time of writing this article, and many would like to use TypeScript with Loopback 3 today. While Loopback 3 makes it very easy to create APIs based on off-the-shelf databases via JavaScript and connectors that are available as NPM packages, it is typically not clear how to convert a Loopback 3 project to TypeScript or create custom, unpublished connectors that will only be used within local projects. As we have seen, it is indeed possible to convert a Loopback CLI project to TypeScript, create a custom connector locally, and attach this connector to a PersistedModel. After copying short bits of boilerplate, only 6 methods have to be implemented to support 16 endpoints. Finally, since a connector can then be reused across many models, a sufficiently general connector can be reused within one project or across many projects.

Software Product Quality at Oasis Digital

A long-studied topic

Decades ago, business guru Philip Crosby famously defined quality as “conformance to requirements”. This definition seems useful in software development only to the extent every aspect of the software has been comprehensively understood and written down – rarely the case in real projects.

Fewer decades ago, software and consulting guru Gerald Weinberg slightly less famously wrote that “Quality is value to some person” – an insight more applicable to our context here at Oasis Digital, consultants and developers of custom software products. (Incidentally, to gain dense insight into software development and other topics in well-written tidy packages, read Weinberg’s books.)

Still, to point out that quality is whatever someone (typically a paying customer) says it is, doesn’t help all that much with a problem we face regularly.

I want a high quality software product, but what does that mean?

At Oasis Digital, customers often come to us with a vision, or partial written requirements, for a software product or system. Around this essential kernel, there are numerous possibly-implied desires or requirements, many related to quality.

What should an organization (or person) want, if they generally want a good, high-quality product? We think of the answer to this as the implied requirements for high-quality software. Recently the team here started gathering a list of quality attributes (special thanks to Paul Spears for kicking this off on a whiteboard for all to see). Here is our checklist of desirable attributes, requirements, meta-requirements, and other aspects of quality or otherwise “good” software.

Quality checklist

The software we work on most often has both server-side and web/tablet/mobile UI, so our checklist contains a somewhat broad mix of topic areas.

Functionality

  1. The software works “on the happy path”; it has all the specified desired features.
  2. The software handles numerous potential error conditions well; it fails gently, and visibly. It recovers, or fails clearly if it can’t.
  3. The software implements a workflow at least as friendly to users as envisioned; ideally even more so.
  4. The software augments, rather than consumes, human mental bandwidth while using it.
  5. The features are generally composable where appropriate. That is, when a pair or set of features are more valuable when used together, they can be used together and work as expected.
  6. The software conforms to legal or regulatory requirements to which it is subject; achieving this often requires cooperation among developers, customer representatives, and sometimes experts in compliance. In some projects this may be a minor aspect, while in others it is a primary defining motivator.

Support and operations

  1. The software captures logs of events that go wrong (and generally also of things that go right); it does so in a manner suitable for aggregation and analysis, with generally well considered log levels, a machine-readable log format, and so on.
  2. The software has features suitable to help with support efforts; for example it shows what version of the software is in use, helpful error data is exposed (in log as well as, where possible, on screen) rather than discarded, etc.
  3. The software is operations-friendly. It has switches, features, or other attributes helpful for operations teams responsible for keeping the software working.
  4. The software does not forget facts to which it has been exposed; where technically feasible, it has an append-only, log-structured view of the world. This supports both debugging efforts and future, not-yet-known requirements.

Appearance and behavior (UI/UX)

  1. The user interface conforms to some design system: its layout and appearance are not completely subjective and ad hoc, but follow some understood and well-considered guidelines for appearance, layout, etc.
  2. Unless the software is either very large (with a large budget), or is an art piece (such as a game), it does not “go its own way” with an ad hoc design.
  3. The user interface is aesthetically pleasing, in the subjective sense. (To whom?)
  4. The user interface further has animations or dynamic style behavior appropriate for its design system.
  5. The user interface is themeable; at least its colors, and possibly other aspects, can be adjusted to fit in coherently with other software that uses some defined color palette. The user interface code should therefore use appropriate color theme variables or similar mechanism, not be hardcoded to match a design system or ad hoc requests.
  6. The user interface is responsive; it makes reasonably good use of a wide range of screen sizes. It is not a fixed size for a single screen size, unless its target (embedded) deployment environment is similarly strictly limited.
  7. The user interface does not suffer the “keyhole problem”; when presenting the user with a significant amount of data, it makes good use of the display to show the user many options and useful context.  http://www.aristeia.com/TKP/draftPaper.pdf
  8. To the extent the user interface presents data in tabular form, the tables present numeric and text content with suitable alignment.
  9. The user interface features the variable contents (data) more prominently than fixed labeling; a well-chosen design system generally will achieve this goal out-of-the-box.

Operating / human environment

(As of this writing, most of the software we work on has a web user interface, and that shows in this checklist.)

  1. The software supports all current browsers, and possibly (depending on target deployment environment) one or more obsolete browsers as needed.
  2. The software has good accessibility characteristics, including testing with a screen reader or similar assistive technology.
  3. The user interface visually scales well in response to user font size overrides; it does not attempt to block the user from changing the font size, and its layout remains usable across a range of font sizes.
  4. The user interface contrast levels (as supported by the design system) are high enough to pass accessibility testing.
  5. Color is used effectively to maximize the speed of comprehension; but no information is ever presented only in the form of color, so that the software remains workable for users who don’t perceive color fully.
  6. The software is reasonably compatible with its platform’s internationalization capabilities; and if needed, has been (or can be) suitably localized.

Performance and throughput

  1. The software has been tested, and works acceptably, with a realistic data volume. It is often necessary to obtain or generate test data of configurable size to verify this need has been met.
  2. Performance and error-handling characteristics have been considered jointly, so that an occasional error does not completely halt the throughput of the software. It is possible to move past or set aside a failure case and continue meeting throughput expectations despite occasional errors.

Security

  1. The software is built on platform or framework choices which have reasonably well-considered security characteristics; the software cooperates with this platform in such a way as to generally inherit those characteristics.

(Security could fill books, not one section of a single blog post. For a software product applied primarily to an internal, benign audience, the above is probably sufficient; but for software deployed to the open Internet, or in other cases where hostile actors are expected, much more substantial security design and implementation is needed.)

Development Process

  1. Intentionally chosen, considered process appropriate for the project
  2. Regular demos or other progress presentations to stakeholders
  3. Regular code review, before (not only after) code enters the mainline of development
  4. (of course many books could be and have been written on development process!)

Internal characteristics

It’s possible to write software which externally does everything it is required to do, but internally is a shambles. Some thinkers imagine that this is the timely and inexpensive way to create software. We have not found that to be the case. Rather, to achieve external quality without overwhelming cost, internal quality is vital. We strive to create reasonably good internal quality without being explicitly asked to do so. Internal quality characteristics often include:

  1. Consistent code style, applied automatically
  2. Linting, applied automatically
  3. Internal and cross project code reuse – general avoidance of duplication
  4. Architectural consistency across portions of a system
  5. Consistent use of suitable platform features; don’t reinvent the wheel, don’t blindly apply techniques from one platform to another

Making sense of Quality for a customer project

This list is long (and could grow much longer). Achieving these things may consume substantial time and effort. At the same time, software projects often arrive at our door already under schedule pressure. To manage this conundrum, we work with customers to consider this list as a default: a list of things that probably should be done, from which a customer might choose to skip some items for schedule or budget reasons. For each aspect of quality, a certain minimum amount of attention is needed (and automatically applied by a high-quality software team), but beyond that there is a range of possibility subject to customer priority.

 

Software Demonstration and Project Status: Use Video

At Oasis Digital, custom software projects work at various cadences: weekly, biweekly, or sometimes in variable-length cycles. Regardless, at each interval or milestone it’s important to deliver a comprehensive demonstration and status update for our project customer.

Live demonstrations considered harmful

Unfortunately, the most obvious way to deliver demos and status updates does not work very well:

  • Perform a live, high-stakes demo – Murphy’s Law applies. Systems break during a live demo.
  • Deliver it fresh, the first time, making it up as you go along.
  • Think about project status only when asked.
  • Seen only by stakeholders who are able to attend the meeting – often a small subset of the people who care about the demonstration and status

It seems silly to even describe these things, but I’ve seen this poor approach as standard across much of the software development world.

Effective software demonstration and project status delivery

We have refined a much better way to deliver software demonstrations and project status updates. The short answer is, “make a video”. The long answer is to make a comprehensive demonstration and project status update video, deliver it to all interested stakeholders, then have a meeting to discuss the demonstration and status. This results in an easier, deeper, and more thoughtful meeting and also serves stakeholders who can’t attend.

Every demo/status video serves a number of purposes and audiences; so it’s important to cover topics of interest to all kinds of stakeholders, not only the stakeholders most able to attend. At the same time, we don’t recommend creating multiple videos for multiple audiences; that is an unsustainable pace of content production, and it takes too much time away from the core work of creating quality software.

Make one medium-length video per cycle (week/biweekly/whatever) to address:

  • Demonstration
  • Project progress summary
  • Upcoming work
  • Key open questions
  • Interesting or important technical details

In this way, each video is of value to both “local” stakeholders (the specific customer team managing the project from day to day) and broader stakeholders across a customer organization.

Next, the nitty-gritty of what goes into such a video and how to make it. The agenda should go roughly in this order.

Introduction / Title Slide

Files (including video files) tend to be misclassified, mislabeled, and misplaced. Someone might open up your demo/status video and not know anything about what’s inside. Therefore, always start with a title slide. That slide should include:

  • Name of the project
  • Name and logo of the customer organization the project is for
  • Date (sometimes just month and year, for slower-paced projects)
  • Name and title of the person making this video (speaking)
  • Name, URL, and logo of the company working on the software (for us, “Oasis Digital”)

While the title slide is visible, briefly introduce yourself. You have only a few seconds of viewer attention; the slide and your introduction should last 10 seconds or at most 15, before you cut to the next section.

Still video is a waste of bandwidth, and drives viewers away. Never let the video stay still while you talk for more than a few seconds.

Demonstration

After that brief introduction, jump right into the demonstration. If you learn only one thing about effective demonstrations, here it is:

Get to the payoff fast.

Don’t wander through a long buildup in which only the most dedicated viewer can reach the important part. Show the payoff, the most important bit, within the first few minutes. Then, go back and explain the rest of the story to give a comprehensive demonstration of use cases.

Your demonstration should bring the viewer through one or more use cases relevant to the work underway. Through these use cases, remind the viewer of the overall purpose and functionality of the software project, and point out the new and changed parts, showing progress.

Demonstrations tend to go wrong, or to waste a lot of time, by default. To produce a quality demonstration:

Practice.

Yes, practice. Jot down a terse outline of what you plan to demonstrate, and practice it a couple of times (with the video recorder running) to get familiar with exactly what will happen. If you see anything urgent to fix while making these practice attempts, you might stop and fix it right then. Then once your practice demo goes well, record the real demo.

In a demonstration of a user interface, text and UI elements must be readable. We get the best results by sizing the software and recording a “stage” of 1280×720 pixels. A video that size can easily be played back in a non-full-screen window on a typical computer. If your software under demonstration can’t be used at that small window size (i.e. screens that really only work at 1920 resolution), make sure to boost font sizes.

(Some stakeholders, including quite important ones, might only have an opportunity to watch your demonstration video on their cell phone! Think about font and other element sizes accordingly.)

Lastly, create a demonstration you can be proud of. If your demonstration went badly, discard that recording and do it over. If you have been keeping your demonstrations tight, it won’t cost much time if you occasionally have to discard and try again.  If your demonstration is so long that starting over is unthinkable, make shorter demonstrations more often.

Project status and management update

After demonstrating progress on the software, provide an update on the project. We heartily recommend the following order:

  1. Review what has been done since the last update; positive progress
  2. Preview what is coming up next; anticipated progress
  3. Discuss upcoming key questions or issues that could delay or prevent progress

Point 1 is especially important and easily overlooked. We have had projects which were objectively going extremely well: delivering a pile of valuable functionality every week for years on end. But looking back, it’s easy to get in a meeting rut – the tone of a project can be ruined by an inadvertent meeting focus on only what is going wrong. Therefore, before discussing what is coming up and what might go wrong, always briefly summarize what has gone well.

The details of how to show status and upcoming work vary by your methodology and toolset. We most often use Jira, and talk about these things by scrolling, clicking, and discussing an Agile Board in Jira, often supplemented by a Dashboard. You can do the same with other software, or even with a manual project management system.

Obstacles and questions

Having shown visible progress in the demonstration then talked about project status, you now have the viewers’ attention to deal with challenges. Most likely any obstacles or questions are connected to issues in your project management tracking system; so click back through the relevant ones and discuss these things. Make sure to show the relevant part of the software and the relevant bits in the project management software. (Reminder – never more than a few seconds of still video with just a person talking.)

We have found that our recap of obstacles and questions on video can be very helpful to our customers’ representatives. They can show the video to other people in their organization who might be able to help with the obstacle. They can listen as well as read – some people enjoy listening more than reading. They can arrive at a live meeting already having thought about the questions and ready to answer.

Technical

The last major section of a demo/status video should dig into any interesting or important technical aspects. Here is the chance to show an IDE or source control tool instead of just the running software or Jira board. Most likely the technical bits worth discussing will concern either recently completed features or features coming up shortly, but sometimes a broader topic might warrant attention.

In our experience, digging into the important technical details can also support rapport and credibility with more kinds of stakeholders. Every organization contains a mix of people most responsive to project management, and others most responsive to technical depth.

Closing

As your video ends, flip back to the title slide and thank the viewer for their attention. As hard as you may have worked (more than the length of the resulting video, sometimes much more), your viewer has also dedicated their limited time to watch. Thank them.

Video and audio production tips

Surprisingly, often the most important aspect of video production is audio. You need a quiet room and a decent quality microphone. The former can be hard to achieve in a busy crowded workspace, but it’s worth the effort. Hide in a conference room. Get a coworker to stand guard at the door.

An amply good microphone costs well under $100. We’ve had good results with various types of headsets (but read more about that later), with Blue Snowball microphones, and with a popular Audio Technica model. All of these are quite inexpensive. Any of them are vastly better than trying to use a laptop’s built in microphone.

Next, screen video. You’ll need appropriate screen video recording software, and you will need to master its configuration. We recommend:

  • ScreenFlow, on OSX
  • OBS, on Windows

Video is about more than just the screen though. If you’ve made it this far into this post, you are ready for perhaps the most important advice of all:

Show your face

A demo/status video is not only about information delivery, it is also about personal connection. Humans are hardwired to connect with other humans while looking at their face. Therefore your face should be visible in the video. Both of the software packages above can easily show your face in a corner of the screen. Do so. (Back to the headset idea – a headset can provide excellent audio pickup, but then you will be wearing a headset in the video. Therefore a headset is not the best solution for this use.)

Video of your face means you need a camera. Most laptops have an amply good camera built-in (but sit your laptop on a stack of books or something handy – so that the laptop camera is not looking up your nose!). Or add an external webcam (< $100) atop an external monitor for better results.

Speaking of cameras, cameras detect light. Rearrange the lighting in your space (or add a $30 lamp) to get some light on the front of your face during your video recording. Your eyes should not be in shadow.

If your recording software supports it (both of the above mentioned packages do), add a “bug”, a term of art for a partially transparent logo in the corner of the screen. For example, if you decide to put your face in the upper right, then the lower left of the screen could contain your company logo at 50% transparency. A video is a branding opportunity in addition to an information communication opportunity.

Finally, reread the advice earlier in the demonstration section about font and screen recording sizes. Then read it again. 🙂

Feedback wanted

We have worked out the advice here over years of various attempts to communicate demonstration and status information well. But we surely have much more to learn, and appreciate any feedback readers send. Thanks for getting this far, and good luck in your demonstrations and meetings.

Product Development Launch – Default Software and Practices Stack

Context

Here at Oasis Digital, some of our projects are (approximately) “green field” product development launches. The scope of such a project typically includes some CRUD-like features, but also a complex-behavior feature or two. The effort typically lasts a few weeks or at most a few months, after which work is transitioned to customer developers (or occasionally to longer-term ongoing work here).

During a product development launch, we typically demonstrate:

  • Key goals around user experience, UI development, etc.
  • Key use cases of a system
  • Working software, sufficiently deployable for demonstrations
  • Feasibility and suitability of a technology stack, client and server side

Importantly though, during such a launch effort the long-term viability of the underlying customer vision is neither fixed nor proven. Rather, a product launch refines the vision and proves the potential value.

Executing a product launch

For the reasons above, it is vital that we execute a product development launch expediently. The process typically looks something like this:

  • Understand the vision and goals
  • Collaboratively define some key use cases, and key user experiences
  • Defer as much complexity as possible outside of these key use cases; don’t let the development launch turn into just a planning effort
  • Choose off-the-shelf tooling to facilitate quick implementation
  • Define key screen flows for the use cases
  • Consider what data appears on each screen (report, integration, etc.), and the flow of data through the system
  • Define an initial “schema of the system”, iteratively through the launch effort
  • Work on an iterative cadence so that we can get through at least several significant iterations during the short project duration

All of that is just context though; what I really want to talk about here is our default software stack for launching a fresh new project. These are just defaults; they often vary by the needs of a specific project, customer, deployment context, etc.

Client / UI

As of 2017, we generally default to a single page web application powered by Angular. While we also work in React and other tools, Angular is where we have the greatest shared experience (from extensive development work, as well as from teaching Angular Boot Camp) and therefore the greatest immediate collaborative productivity.

Angular is also the technology area where we innovate most. We use it for many projects, we train on it, we follow its development closely, we participate in open source. We attend and sponsor conferences. We are connected with the Angular community.

At the same time, customers coming to us for a product launch are typically most interested in seeing a working user interface that demonstrates their vision. Therefore the greatest share of our work in a product development launch is in the user interface.

Server

Because typically our time is focused primarily on user interface/client-side work, it is important to have a set of highly effective tools with which we can execute well-understood server-side APIs very quickly. Therefore, we default to:

  • Java
  • Spring Boot
  • Spring Data JPA / Hibernate
  • Various other ancillary related libraries and tools
  • A transaction scripting approach for the handful of complex use cases in a launch effort (a minimal sketch follows this list)
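
To make “transaction scripting” concrete, here is a minimal sketch of the kind of endpoint we mean on this default stack. The Order entity, OrderRepository, and DTO are hypothetical names for illustration, not from any particular project; a real endpoint would add validation and error handling.

    import org.springframework.http.HttpStatus;
    import org.springframework.transaction.annotation.Transactional;
    import org.springframework.web.bind.annotation.*;
    import org.springframework.web.server.ResponseStatusException;

    // Hypothetical transaction-script endpoint: one readable, top-to-bottom
    // use case, wrapped in a single transaction, returning a plain DTO.
    @RestController
    @RequestMapping("/api/orders")
    public class OrderController {

        private final OrderRepository orders;   // a Spring Data JPA repository

        public OrderController(OrderRepository orders) {
            this.orders = orders;
        }

        @PostMapping("/{id}/cancel")
        @Transactional
        public OrderDto cancel(@PathVariable Long id) {
            Order order = orders.findById(id)
                    .orElseThrow(() -> new ResponseStatusException(HttpStatus.NOT_FOUND));
            order.markCancelled();              // the "script": a state change...
            orders.save(order);                 // ...persisted via JPA/Hibernate
            return OrderDto.from(order);        // ...returned to the Angular client
        }
    }

Little ceremony and little layering – which is exactly the point during a short launch effort.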

These tools are, perhaps to a 2017 eye, somewhat boring. But they are boring because they are well proven; they work. They very rarely yield, within the scope of initial development, any significant obstacles to delivery. That makes them very well suited for a short-duration effort.

Because these tools are so well proven, and because they permit a mostly declarative implementation approach, the resulting small code base warrants little automated testing at the beginning. We don’t need tests to show that this stack can correctly implement a RESTful API; if it had any trouble doing so, we would replace the stack, not nitpick it with tests.

(While Java is the typical default, we also frequently use Node and related libraries instead; there is a trade-off here between less mature tools, versus the payoff of using more similar technology between client and server code.)

GraphQL

Sometimes things are not quite as boring as they seem though. If the data to be fetched is complex, we typically pick up GraphQL to slash the code quantity and development time for complex data fetching. Data volumes are usually modest during a short-term launch effort, so a straightforward lazy fetching approach via GraphQL resolvers (which go by another name in some implementations) does the job with little effort. This sometimes results in “N+1” database query operations – a problem to be solved later in development, once the scaling and performance attributes are understood.  GraphQL provides a means to do those optimizations, which we defer until they are needed.
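
As a rough illustration (the Order/OrderLine types and repository are hypothetical), here is what such a lazy resolver can look like with the graphql-java library:

    import graphql.schema.DataFetcher;
    import graphql.schema.idl.RuntimeWiring;
    import java.util.List;

    // Hypothetical wiring: each Order's "lines" field is fetched on demand.
    // Simple to write, but querying N orders issues N extra queries – the
    // N+1 pattern we knowingly accept during a launch effort.
    public class OrderWiring {

        private final OrderLineRepository orderLines;

        public OrderWiring(OrderLineRepository orderLines) {
            this.orderLines = orderLines;
        }

        public RuntimeWiring wiring() {
            DataFetcher<List<OrderLine>> linesFetcher =
                    env -> orderLines.findByOrderId(((Order) env.getSource()).getId());

            return RuntimeWiring.newRuntimeWiring()
                    .type("Order", t -> t.dataFetcher("lines", linesFetcher))
                    .build();
        }
    }

Later, when data volumes justify it, the same field can be backed by a batched loader (graphql-java integrates with the java-dataloader library) without changing the schema the client sees.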

Database

At the database layer, we innovate the least. We typically recommend a common and well-proven relational database. Our default is PostgreSQL, although sometimes customer deployment needs may result in MS SQL Server or another RDBMS.

Product development launch efforts are about speed, so we don’t write the database schemas by hand. We define data structures in program code, then use the tooling (for example, a JPA implementation) to generate the schema. Data migration is also generally not an issue in a short-term launch effort; that comes into play in a longer-lived project that goes into production with data to preserve across versions.
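
For example (entity and property names are illustrative only), a JPA entity plus one Spring Boot property is typically all the “schema work” a launch needs:

    import javax.persistence.*;

    // Hypothetical entity – Hibernate derives the table and columns from these
    // annotations, so there is no hand-written DDL during the launch effort.
    @Entity
    @Table(name = "customer")
    public class Customer {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        @Column(nullable = false, length = 200)
        private String name;

        // getters and setters omitted for brevity
    }

…and in application.properties:

    # let Hibernate create/update the schema during the launch effort
    spring.jpa.hibernate.ddl-auto=update
    spring.datasource.url=jdbc:postgresql://localhost:5432/launch_demo

Once a project is long-lived and has production data to preserve, we switch to explicit, versioned migrations rather than relying on automatic schema updates.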

Deployment

Large, long-term software projects will end up with specialized operations experts who shepherd them through critical infrastructure – but this post is about short product launch projects. These projects need to be made visible for review, demonstration and so on, long before the organizational wheels can turn for serious deployment infrastructure.

Therefore, we typically deploy the software for demonstration to a cloud server instance of some kind (AWS, Google Cloud, Digital Ocean, etc.), with minor scripting or tooling to automate deploying new versions frequently (sometimes even at every commit) during development. This is not scalable, and not nearly as automatable as more robust solutions, but it is a perfect starting point that can be put in place right away.

Quality

During a short-duration project effort, we write code quickly, but still keep a close eye on quality. Writing good code typically results in faster progress on any timescale beyond a day or two.

What about the rest of the tools and practices?

Reading back over this description of the choices we make to launch an effort quickly, you might get the impression that we don’t deploy modern techniques, or that we haven’t heard of the latest buzzwords (or the last few decades’ worth). On the contrary, we have a full array of additional techniques to apply as a project grows in scope, size, and duration.

Microservices

For certain projects, a microservice architecture produces numerous benefits. But even the esteemed software architect Martin Fowler suggests starting with a monolith: https://martinfowler.com/bliki/MonolithFirst.html

Unit testing

During a product launch, much of the code is boring, ordinary use of external libraries, which needs very little unit testing. But a project that grows beyond the initial launch will need extensive unit testing around any logic of complexity or interest.

API testing

As APIs become more complex, they warrant thorough test coverage – so a project that grows will get that coverage.

E2E Testing

A product launch effort typically yields a modest number of screens undergoing rapid change, which makes it unsuitable for browser-based, automated end-to-end testing. So we skip that during the launch effort.

We don’t forget though – here at Oasis Digital we are very big fans of automated E2E testing, and have seen it pay off on a daily basis, for most any project that lives more than a few months.

NoSQL

NoSQL can solve a great number of problems, and we recommend this type of data store when it is needed. This rarely occurs in the first month or two of the project when the vision and user experience are still being understood.

CQRS / DDD / ES

We have used these techniques extensively, as you can read about elsewhere on our blog. But we mostly set these skills aside during a fast product launch: they pay off at scale, yet can make it harder to ever reach that scale if they are allowed to consume too much time early in a project.

Planning and Methodology

During a short launch effort, planning happens primarily via a whiteboard, a spreadsheet, or a similar tool. If the launch proves the value of the vision, a project may grow large enough to warrant more complex planning and project management.

Issue tracking

Although we work extensively with issue tracking technology (our sister company builds add-on products and provides services around Atlassian Jira), during a product launch effort our issue tracking approach is intentionally very lightweight. Issues that won’t get attention during the launch effort are simply listed somewhere, tersely. Issues that need attention right away typically get that attention right away, or are tracked in some lightweight manner. A product launch effort that lasts only a few weeks to a few months might or might not use a “real” issue tracker in that short time, while an effort that grows into a long-term project will use one extensively for tracking, planning, support, etc.

Much more

This is just a short sampling of practices and how they may apply differently at the beginning of a short effort versus late into a large one.

How we use Git, and why

Here at Oasis Digital, we use Git for source control for nearly all of our projects. There are numerous different ways to use Git, and over many projects we have evolved a set of effective practices. We have found the approach described here works very well for almost everything we do, though of course as a consulting organization we sometimes adjust things to meet a specific customer need.

Why Git?

Ubiquity

A decade ago, distributed source control arrived suddenly in the mainstream, after years as a niche segment. There were a number of contenders, but Git “won” by a large margin. Today Git is essentially the default choice, the powerful choice, the ubiquitous choice.

Incidentally, Git’s endless flexibility is a major reason it won the race. Various other competitors typically had better ergonomics, easier adoption, easier understanding… but less flexibility. Flexibility means that organizations large and small can use it in a way that meets specific needs.

Technical excellence

Unlike with some other systems, we have never lost a line of code due to a defect in Git. Further, we have never needed a permutation that is fundamentally impossible with Git. We have found that, within the confines of our project sizes, it scales extremely well. (Though see the section at the end about limitations in this area.)

Multiplatform

Our projects are often developed across Linux, Windows, and Mac. Git works very well across all of them. Notably, it offers solutions (rather than a “head in the sand” approach) for the differences in line endings among platforms. Yes, it is inconvenient to deal with line ending differences; Git has the tools to do it and get good results, without trying to pretend that one platform is another.
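
For example, a small .gitattributes file checked into the repository (plus a one-time, per-machine setting) is usually all it takes:

    # .gitattributes – committed to the repo, so every platform normalizes the same way
    *       text=auto
    *.sh    text eol=lf
    *.bat   text eol=crlf
    *.png   binary

    # One-time, per-machine setting:
    #   Windows:     git config --global core.autocrlf true
    #   Linux/macOS: git config --global core.autocrlf input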

Distributed

Git can operate without a network connection, and more importantly, it operates locally at the speed of a local machine rather than at the speed of a potentially overwhelmed, faraway server. 90% of Git operations are completely local. This is an enormous benefit during daily use, though it has a downside – more risk of a developer forgetting to push code to a server. We handle that concern so well in other ways (project team discussions) that it has never caused difficulty.

Ecosystem

Due to its ubiquity, Git has spawned a vast ecosystem of related tools. Nearly every editor and IDE understands Git. Nearly every build or continuous integration mechanism understands it. Nearly every code review tool understands it. There are numerous graphical Git tools for every platform, so the ecosystem is not dependent on any one vendor or team to continue producing quality results.

How we use Git

Use it as an expert

As with most other tools we use, we are committed to expert-level mastery, not muddling through. Oasis Digital developers learn to understand the fundamental Git data model: a tree of commits, each containing a tree of files. We learn the essential operations and common variations. We learn to understand what the commit tree looks like, what we want it to look like, and how to choose the right command to get from one state to the other. We are “source control nerds”.

We believe this makes sense because source control is not merely an ancillary tool; change management is deeply fundamental to robust, long-lived systems. Source control is not an inconvenience, it is an accelerator.

We treat the Git commit tree (portions of it – read on) as putty in our hands. This is an intentional trade-off versus the notion of using a restricted subset of Git, but we believe it is the right trade-off for our use cases.

Develop on branches

We develop on branches, not on master. In all but the smallest projects, branches go through review and discussion before landing on master.

We use many branches, large and small. We don’t pointlessly mix unrelated changes in the same branch. Git switches among and manages branches almost instantly, so the cost of using additional branches is nearly zero. We have found that the mental overhead of managing more than one branch using a tool is much less than the mental overhead of juggling multiple unrelated changes in the same branch – a phenomenon we see regularly when developers use tools that make branching difficult or slow.
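
The mechanics are correspondingly lightweight; a typical day’s worth of commands looks something like this (the branch names are made up):

    git checkout -b feature/report-export   # a new branch per independent change
                                            # ("git switch -c" in newer Git versions)
    # ...edit, build, test...
    git add -A
    git commit -m "WIP: first cut of report export"
    git push -u origin feature/report-export

    git checkout fix/login-redirect         # jump over to review a colleague's branch;
                                            # switching costs seconds, not minutes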

Branching model – varies

We manage those branches differently, depending on the needs of a project:

  • small branches, directly off master
  • shared branches, as needed
  • release branches, to support old releases
  • development branches
  • trunk-based development, with small or large branches

Master always works

Master (and for more complex projects, certain other branches) always works. Code is reviewed and tested before going on master, not only after. If you read and adopt only one bit from this whole page, this is the most important. Review and commit and make the code good before it goes on master; don’t put junk on master and hope for a drive-by review-fix later.

Master strictly improves

Because we test and review and fix code before it goes on master, master (again, for complex projects, certain other branches) strictly improves. Each master commit adds at least one improvement or fixes at least one problem, without making the software worse. Of course we do not reach the standard perfectly every time, but this is the standard we aim for, and we reach it most of the time.

Immutable master; mutable development branches

Except in rare cases, once commits are on master we treat them as permanently immutable. They form a long-term record of the history of the project. They are a curated, comprehensible telling of the story of the development.

Work performed on a branch is iterated repeatedly on that branch. Sometimes it is squashed, and often it is rebased. Only when it is ready (to act as a strict improvement to master) does it get a final review, squash, rebase, and merge. (As with everything else, except for certain special cases.)

Incidentally, this means that we very rarely have a merge commit on master. The long-term history of our development efforts is easy to read, as a mostly straight line of medium-sized (not too large, not too numerous) commits.
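
In command form, the path from a messy branch to a clean master commit looks roughly like this (on most projects the final merge actually happens through the review tool rather than by hand):

    git fetch origin
    git rebase -i origin/master        # squash the WIP commits, reword messages
    git push --force-with-lease        # update the shared branch after rewriting it

    # after the final review:
    git checkout master
    git merge --ff-only feature/report-export   # keeps master a straight line
    git push origin master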

Most of our developers on most of our projects are not even able to push to master – and are hardly affected by this restriction in daily work. Even with this process, a typical developer will have a commit land on master between one and a few times per week, depending on numerous factors around how finely the work is divided in the project.

Commit early, commit often, commit anything, push

The corollary of our high bar for work that goes on master is an extremely low bar for work that can go onto a branch and be pushed. We rarely go to sleep with uncommitted or unpushed code. When we need another developer to look at our work in progress, we commit it and push it as work in progress. When a new developer starts, they often commit and push code to a branch the first day. By keeping this bar low, we provide the maximum opportunity for new developers to become fluent with source control, and to use our tooling as a communication mechanism (“here is some code, will you look at it for me?”) early and often.

Squash out the mess

There is an inevitable, human tendency to be careful with anything that will be “part of your permanent record”. This is built into all of us, though it seems to be more pronounced in some cultures than others. But by combining our very low bar for what can be committed and pushed with a policy of squashing minor steps together to yield an aggregated, high-quality change, we moderate this tendency very effectively. Code can be pushed any time for just a casual look. It won’t be part of the permanent history until it is good enough. No fear.

Left to its own devices, this tendency to be careful would force developers to “raise the bar” for what gets committed or pushed. A rational developer (regardless of policy) in such an environment must be careful about what they put in source control. We have found that this inevitably pushes developers to do more of their work without source control – for example, by shuffling files locally outside of the source control tool, by leaving work unpushed, etc.

Source control is a very powerful tool, and a low-bar, push-often, squash-and-rebase approach puts that power into the hands of every developer, individually as well as all developers working together.

But squash carefully

Still, squash and rebase are potentially troublesome features. So when we squash, we do not squash arbitrarily. We aim to squash groups of related changes worked on along the path to a specific feature, but not to squash unrelated work. One minor caveat though: to tell the story of the development in the most comprehensible way, the permanent master commits should be of both medium size and medium number. Therefore we sometimes squash together a set of individually very small changes, when we can do so without creating difficulties.

Use a GUI

Many very skilled developers are fond of command line interfaces, for many good reasons. The Git command line interface, while slightly problematic in some ways, is extremely powerful for making changes. However, it is not particularly well-suited for understanding a Git commit tree, especially on a busy project with numerous concurrent branches. For this reason, when working heavily with Git we always use a graphical interface at least to visualize and understand the commit tree, even when we use the command line for manipulation. There are various high quality choices for GUIs on each platform, and on all platforms the built-in “gitk” interface is very helpful in a pinch to understand the commits – without any additional tool selection, download, or install.
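
Two quick ways to get that visualization with nothing but a stock Git install:

    gitk --all &                                  # the bundled graphical tree viewer
    git log --graph --oneline --decorate --all    # a text-mode view of the same tree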

Work together, learn, teach

We work together regularly. A group of developers working on the same project will sit in a collaboration space together (inviting remote members online) at a giant screen attached to a fast development machine. We work on the hard problems together, we review tricky code together. We learn and teach together how to use all of our tools, including Git.

Caveats, limitations, and context

Small, medium, and large, but not huge

Git, and the practices here, are useful on our projects of small, medium, or largish size. Beyond a certain size, many of the practices would still work, but Git becomes awkward. For example, some of the largest software operations in the industry (Google, Facebook) use a “monorepo” with all of the code for all of their projects and nearly all dependencies thereof. A different kind of toolset is needed for such extremely large repositories.

Source code, but few binary assets

Further, we keep small, medium, and somewhat large collections of source code in Git; when our work has occasionally involved many large media assets, we have used other means to track them. Git, specifically, is probably not a suitable choice for (for example) a large-scale game development effort with hundreds of gigabytes of binary assets.

Skill and understanding needed

Here at Oasis Digital, we are not in the “get a bunch of people to grunt out some code” business. Instead we are in the “recruit and hire good people, and help them become great” business. This is necessary for us, because of the premier customers we serve, and because we not only develop, we also teach. We are all about deep mastery of tools. Our Git approach fits very well with this context.

We’ve sometimes heard the objection to Git, that it is difficult for less skilled developers to understand fully and use correctly. This is possibly true, but it is not that important to our context – and we think it is not that important at all. Anyone with the capability of understanding software enough to develop quickly, efficiently, and with quality, certainly has the capability to master Git.

With great power…

Even with good understanding, Git is still relatively easy to misuse. It offers numerous very powerful commands; it is like a chainsaw and a machine shop, not a set of safety scissors.

We have found that this power punishes developers who don’t look at a graphical representation of the commit tree regularly. It rewards developers who do.

Syntax

Even keeping in mind the power of the Git command vocabulary, its CLI syntax leaves much to be desired. It shows clear signs of having evolved awkwardly, with early accidental complexity retained for backward compatibility.

However, we have found that developers who have the discipline to look at a graphical representation of the commit tree generally gain a better understanding of the operation they want to perform, and are more likely to perform it correctly even when using the CLI.

Beware neutered GUIs

Some Git GUI products aim to simplify Git “for the masses”. In the process, they manage to do a terrible job of the most essential function: visualizing and understanding the commit tree. Beware of a tool that resists showing you this tree. Even following the practices described here (where master will be a straight line most of the time), it is frequently necessary to understand a bunch of concurrent, possibly tangled branches when a project is under heavy many-developer work. The right GUI makes this fast and easy.

Churn

I think the Git community is now on its third major mechanism for subproject use. We avoid subprojects as much as possible, but experience some frustration when we must use one.

Revisionist history

Our approach clearly and intentionally yields a revisionist, streamlined history. As described above, the resulting history is crafted for understanding, and for (rarely) backing out the final set of related changes for some piece of work, rather than miscellaneous partial changes. Still, some people are uncomfortable with the revisionism, and prefer a full record of every step along the way. This is a trade-off, and we have found the revisionist approach has more upside than downside.

Discipline needed

Even with these restrictions, discipline is still required – for example, to avoid creating a branch from a spot in history (something not yet on master) that will change in the future.

The Oasis Digital Spectrum of Services

In the early years of Oasis Digital, we offered exactly one service: outsourced software development contracting. Since then, we’ve expanded to a spectrum of related services. The result doesn’t fit in an “elevator pitch”, but it meets the needs of customers much better. Our “spectrum of services” will be more clear on our main website and elsewhere over time, but this blog post explains it as succinctly as we can. Ranked in approximate order from smallest to largest, we offer:

1) Free technical content

Our expert developers/trainers speak and write about relevant technical topics, and the results are almost always freely available: talks on our YouTube channel, posts on our blog, our twitter accounts, and so on. Further, we attend various conferences and are always happy to speak to people who come talk with us there.

2) Tickets to public classes

We teach on several technical topics, most prominently Angular Boot Camp. Class tickets are a great fit for an individual developer or small group, who can purchase, then attend online or in person around the US and occasionally around the world.

3) Private training classes

To train a whole team at once, we offer private classes, both online and in-person. Private classes can also easily be extended with add-on days of customized consulting and training, for customers looking for added value. Some customers engage us for a series of private classes.

4) Application assessment

An application assessment is a short consulting engagement (typically 1 week, with 1 or 2 of our experts) in which we meet with a new customer in-depth, to assess an application (or understand the vision of an application). The assessment includes a written report, and (if needed) a proposal for future recommended work.

5) Ongoing expert assistance

Oasis Digital can provide ongoing expert assistance in a retainer-like arrangement. We regularly meet with your developers to help with questions, issues, code review, design guidance, and implementation of key areas. We offer different packages depending on how much assistance you need each month.

6) Agile product development

In an Agile development project, Oasis Digital works with a customer on an iterative basis, prioritizing features (“stories”) and responding to changes and guidance. Such a project is especially suitable when the product vision is established but feature needs are still evolving. An Agile project is straightforward to contract and price (based on the team size), and can start quickly, then last as long as needed. We often begin Agile projects with Oasis Digital developers, then gradually integrate customer developers over time for an eventual handover. This project style is also well suited when Oasis Digital is joining an existing effort already in progress.

7) Scoped product development

In a scoped product/project, the features needed are worked out in advance (a scoping effort might be part of an application assessment, for example) so that Oasis Digital can provide a price and schedule to achieve that known list of features and surrounding goals. This style of project is decidedly less agile (changes and additional features are generally implemented after successful delivery of the initial scope), but can also ultimately be more efficient – our experts are especially adept at skipping directly to a high-quality approach, avoiding false starts and reducing rework during iteration.