Product Development Launch – Default Software and Practices Stack

Context

Here at Oasis Digital, some of our projects are (approximately) “green field” product development launches. The scope of such a project typically includes some CRUD-like features, but also a complex-behavior feature or two. The effort typically lasts a few weeks or at most a few months, after which work is transitioned to customer developers (or occasionally to longer-term ongoing work here).

During a product development launch, we typically demonstrate:

  • Key goals around user experience, UI development, etc.
  • Key use cases of a system
  • Working software, sufficiently deployable for demonstrations
  • Feasibility and suitability of a technology stack, client and server side

Importantly though, during such a launch effort the long-term viability of the underlying customer vision is not yet fixed or proven. Rather, a product launch refines the vision and proves its potential value.

Executing a product launch

For the reasons above, it is vital that we execute a product development launch expediently. The process typically goes something like this:

  • Understand the vision and goals
  • Collaboratively define some key use cases, and key user experiences
  • Defer as much complexity as possible, outside of these key use cases; don’t let the development launch turn into just a planning effort
  • Choose off-the-shelf tooling to facilitate quick implementation
  • Define key screen flows for the use cases
  • Consider what data appears on each screen (report, integration, etc.), and the flow of data through the system
  • Define an initial “schema of the system”, iteratively through the launch effort
  • Work on an iterative cadence so that we can get through at least several significant iterations during the short project duration

All of that is just context though; what I really want to talk about here is our default software stack for launching a fresh new project. These are just defaults; they often vary by the needs of a specific project, customer, deployment context, etc.

Client / UI

As of 2017, we generally default to a single page web application powered by Angular. While we also work in React and other tools, Angular is where we have the greatest shared experience (from extensive development work, as well as from teaching Angular Boot Camp) and therefore the greatest immediate collaborative productivity.

Angular is also the technology area where we innovate most. We use it for many projects, we train on it, we follow its development closely, we participate in open source. We attend and sponsor conferences. We are connected with the Angular community.

At the same time, customers coming to us for a product launch are typically most interested in seeing a working user interface that demonstrates their vision. Therefore the greatest share of our work in a product development launch is in the user interface.

Server

Because typically our time is focused primarily on user interface/client-side work, it is important to have a set of highly effective tools with which we can execute well-understood server-side APIs very quickly. Therefore, we default to:

  • Java
  • Spring Boot
  • Spring Data JPA / Hibernate
  • Various other ancillary related libraries and tools
  • A transaction scripting approach for the handful of complex use cases in a launch effort

These tools are, perhaps to a 2017 eye, somewhat boring. But they are boring because they are well proven; they work. They very rarely yield, within the scope of initial development, any significant obstacles to delivery. That makes them very well suited for a short-duration effort.

Because these tools are so well proven, and because they permit a mostly declarative implementation approach, the resulting small code base warrants little automated testing at the beginning. We don’t need tests to show that this stack can correctly implement a RESTful API; if it had any trouble doing so, we would replace the stack, not nitpick it with tests.

(While Java is the typical default, we also frequently use Node and related libraries instead; there is a trade-off here between less mature tools and the payoff of using more similar technology between client and server code.)
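
For a sense of scale, here is a minimal sketch of what one such well-understood endpoint looks like in the Node flavor, using Express; the `db` data-access object and the route shape are hypothetical stand-ins for whatever a given project uses.

import * as express from 'express';

// Hypothetical data-access layer -- stands in for the project's real one.
declare const db: {
  customers: { findById(id: string): Promise<{ id: string; name: string } | null> };
};

const app = express();

// A well-understood CRUD read: a few declarative lines, little to test.
app.get('/api/customers/:id', async (req, res) => {
  const customer = await db.customers.findById(req.params.id);
  if (!customer) {
    res.sendStatus(404);
    return;
  }
  res.json(customer);
});

app.listen(3000);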

GraphQL

Sometimes things are not quite as boring as they seem though. If the data to be fetched is complex, we typically pick up GraphQL to slash the code quantity and development time for complex data fetching. Data volumes are usually modest during a short-term launch effort, so a straightforward lazy fetching approach via GraphQL resolvers (which go by another name in some implementations) does the job with little effort. This sometimes results in “N+1” database query operations – a problem to be solved later in development, once the scaling and performance attributes are understood.  GraphQL provides a means to do those optimizations, which we defer until they are needed.
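
To make the resolver idea concrete, here is a minimal sketch in the style of common JavaScript GraphQL servers (e.g., a graphql-tools resolver map); the types and `db` object are hypothetical:

interface Order { id: string; customerId: string; }
interface Customer { id: string; name: string; }

// Hypothetical data-access layer.
declare const db: {
  orders: { findAll(): Promise<Order[]> };
  customers: { findById(id: string): Promise<Customer> };
};

const resolvers = {
  Query: {
    // One query for the order list...
    orders: () => db.orders.findAll(),
  },
  Order: {
    // ...plus one lazy query per order for its customer: the classic “N+1”.
    // Fine at launch-effort data volumes; later, a batching layer such as
    // DataLoader can collapse these lookups without changing the schema.
    customer: (order: Order) => db.customers.findById(order.customerId),
  },
};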

Database

At the database layer, we innovate the least. We typically recommend a common and well-proven relational database. Our default is PostgreSQL, although sometimes customer deployment needs may result in MS SQL Server or another RDBMS.

Product development launch efforts are about speed, so we don’t define the database schemas by hand. We define data structures in program code, then use the tooling (for example, a JPA implementation) to generate the schema. Data migration is also generally not an issue in a short-term launch effort; migrations come into play in a longer-lived project which goes into production with data to preserve across versions.
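
As one illustration of the same idea on the Node side (a hedged sketch; the entity and database names are made up), TypeORM can generate the schema from entity classes:

import 'reflect-metadata';
import { Column, createConnection, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity()
export class Customer {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  name: string;
}

// synchronize: true creates/updates tables from the entity definitions --
// a good fit for a launch effort; real migrations come later, once there is
// production data to preserve across versions.
createConnection({
  type: 'postgres',
  database: 'launch_demo',
  entities: [Customer],
  synchronize: true,
});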

Deployment

Large, long-term software projects will end up with specialized operations experts who shepherd them through critical infrastructure – but this post is about short product launch projects. These projects need to be made visible for review, demonstration and so on, long before the organizational wheels can turn for serious deployment infrastructure.

Therefore, we typically simply deploy the software for demonstration, to a cloud server instance of some kind (AWS, Google Cloud, Digital Ocean, etc), with minor scripting or tooling to automate deploying new versions frequently (sometimes even at every commit) during development. This is not scalable, and not nearly as automatable as more robust solutions, but it is a perfect starting point for something to put in place right away.

Quality

During a short-duration project effort, we write code quickly, but still keep a close eye on quality. Writing good code typically results in faster progress on any timescale beyond a day or two.

What about the rest of the tools and practices?

Reading back over this description of the choices we make to launch an effort quickly, you might get the impression that we don’t deploy modern techniques, or that we haven’t heard of the latest buzzwords (or the last few decades’ worth). On the contrary, we have a full array of additional techniques to apply as a project grows in scope, size, and duration.

Micro Services

For certain projects, a micro service architecture produces numerous benefits. But even the esteemed software architect Martin Fowler suggests starting with a monolith: https://martinfowler.com/bliki/MonolithFirst.html

Unit testing

During a product launch, much of the code is boring, ordinary use of external libraries, which needs very little unit testing. But a project that grows beyond the initial launch will need lots of unit testing around any logic of complexity or interest.

API testing

As APIs become more complex, they warrant thorough test coverage – so a project that grows will get that coverage.

E2E Testing

A product launch effort typically yields a modest number of screens undergoing rapid change, making it unsuitable for browser-based end-to-end automated testing. Therefore, we typically skip that during the launch effort.

We don’t forget though – here at Oasis Digital we are very big fans of automated E2E testing, and have seen it pay off on a daily basis, for most any project that lives more than a few months.

NoSQL

NoSQL can solve a great number of problems, and we recommend this type of data store when it is needed. This rarely occurs in the first month or two of the project when the vision and user experience are still being understood.

CQRS / DDD / ES

We have used these techniques extensively, as you can read about elsewhere on our blog. But we mostly set these skills aside during a fast product launch; these are techniques that pay off at scale, but they can make it hard to get to scale if they are allowed to consume too much time early in a project.

Planning and Methodology

During a short launch effort, planning happens primarily via a whiteboard, spreadsheet, or similar tool. If the launch proves the value of the vision, a project may grow large enough to warrant more complex planning and project management.

Issue tracking

Although we work extensively with issue tracking technology (our sister company builds add-on products and provides services around Atlassian Jira), during a product launch effort our issue tracking approach is intentionally very lightweight. Issues that won’t get attention during the launch effort are simply listed somewhere, tersely. Issues that need attention right away typically get attention right away, or are tracked in some lightweight manner. A product launch effort that lasts only a few weeks to a few months might or might not use a “real” issue tracker in that short time, while an effort that grows to a long-term project will use one extensively for tracking, planning, support, etc.

Much more

This is just a short sampling of practices and how they may apply differently at the beginning of a short effort versus late into a large one.

Frameworks and commercial ecosystems

Or, “why we don’t teach Aurelia”

Here at Oasis Digital and its sister company (Expium), we offer training and services concentrated around various languages and frameworks:

  • Angular
  • TypeScript
  • Node
  • The web platform in general
  • JIRA, Confluence, and other Atlassian products (Expium is an Atlassian Solutions Partner)

There are many reasons – technical, historical, intentional, and accidental – behind how we ended up with this set of technologies as our 2017 training and consulting focus.

I was reminded of one key factor today while watching a video from last year of a talk by Rob Eisenberg. Rob is exceptionally sharp, and seems to have a good sense of taste in designing frameworks for developer satisfaction. But I found myself in disagreement over his thoughts around web framework adoption. Rob argues that frameworks like his (Aurelia) are stronger, better choices to build on than frameworks like Angular and React, because first-party training and support services are available for Aurelia from the makers of Aurelia. This initially seems like a compelling pitch; I can see how it would woo some decision-makers. One of his slides makes this point, listing first-party training as an advantage.

But I think ultimately this works out much less well than Rob describes. Why? Because a first-party set of training and consulting offerings leaves less space for a thriving commercial ecosystem to develop around a framework.

Let’s look at Angular for example. Here at Oasis Digital, we aim to be a leader among the many firms around the world that provide training, consulting, and related services for Angular. Our customers are quite happy with the availability of these services from many different companies; it reduces their risk and means they can shop around for the best fit. Moreover, because Angular has created opportunities for companies like Oasis Digital, it has facilitated a growing commercial ecosystem revolving around the framework. Much the same applies, for example, to React and Vue.js. This is a virtuous cycle. The non-service-offering core team leaves room for others to provide services, which in turn makes it easier and safer for customers to adopt the framework.

(A second example at Oasis Digital’s sister company Expium: Expium focuses entirely on the Atlassian product suite. While Atlassian offers online video training options, Expium’s offerings include things like live human training that don’t compete directly with Atlassian’s offerings. Atlassian enjoys a thriving commercial ecosystem.)

Of course it would be possible for companies like us to offer training and consulting focused on Aurelia. But we don’t want to do that; we like the people responsible for the framework. If we offered services for Aurelia, we would have an inherently competitive relationship with the company behind Aurelia, vying for the same customer opportunities.

This situation applies to various other frameworks and other technical specialties that we could choose to focus on; with so many choices, it inevitably feels wiser to choose those where we can be allied with the core teams rather than in competition with them.

I believe that overall, this is quite important in understanding why some frameworks gain enormous momentum and others do not. Creating this kind of opportunity for a commercial ecosystem is an immense competitive advantage for those companies who can offer a framework without needing to build a business directly around it – like Google and Angular.

 

Analysis Projects – expedited, insightful project review

We deliver a wide range of services at Oasis Digital – training, mentoring, development, and plenty in between. Occasionally we do an “analysis project”, in which we thoroughly review a project’s code and related assets, then prepare a written report and discussion of recommendations. Customers typically request such an analysis for the usual business reasons:

  • Acquisition or investment diligence
  • Budget has gone awry
  • New opportunity drives a decision of whether to update or replace a software system
  • As part of a technology “bake-off”
  • Etc.

Although we work across a wide range of technologies, our project analysis work is focused on the technologies we use most often: Angular, React, TypeScript, Node, and many related technologies. The projects we analyze often have portions which span other technologies, and if they stray too far from areas where we have the greatest expertise, we caution customers that our analysis of those portions will be less deep.

Typical Analysis Project Scope of Work

The scope of work on an analysis project typically looks something like this:

  1. Meet with customer project managers and product owners to understand the business purposes for a project.
  2. Meet with customer developers to understand the technical background and status of the project.
  3. Study source code, documentation and other materials provided by customer.
  4. If there are multiple candidate sets of source code, use comparison tools and try to determine which is the most complete or up-to-date.
  5. To the extent possible, set up a development/test environment to execute the project. For some projects this is straightforward, rarely it proves impossible, usually it is somewhere in between.
  6. Assess use of third-party tools (open-source and commercial).
  7. Prepare a written report of our assessment and recommendations for the project.
  8. Prepare a screen video code review of some critical portions of the source code, if appropriate.
  9. One or more meetings with customer stakeholders (developers, product owners, etc.) to discuss the results.

Deliverables are:

  1. the written report
  2. optional screen video code review
  3. discussion meetings

The scope and deliverables for an analysis project do not include features, bug fixes, or other development work. We do many projects which deliver those things – but those are not analysis projects.

Typical analysis project report outline

An analysis report will typically include sections like these:

  1. Introduction and summary of customer situation
  2. Summary of analysis work performed
  3. Analysis of how data is stored and managed in the software as it executes
  4. Analysis of how data is persisted long-term, i.e. how data is stored to a database/files/backend/etc
  5. Listing of third-party code libraries used
  6. Assessment of the currency and future prospects of such libraries
  7. Analysis of coding techniques used in the software, relative to popular or best practices
  8. Commentary on any potential security issues we notice (although as a company we do not specialize in security analysis)
  9. Assessment of the user interface
  10. Assessment of the overall amount of functionality relative to the quantity of source code
  11. Overall assessment of the maintainability and future prospects of the code

Of course the specific outline varies from one analysis to another, and we often have special requests from a customer to look in particular depth at one area or another.

Schedule and Cost

Depending on the size of a project, the duration can vary from days to weeks, and the cost can also vary widely. We can generally quote a price once we know some statistics about a project and have discussed it with someone who is familiar with the code.

For example, on a project that uses AngularJS, we would inquire about:

  • How many JavaScript files?
  • How many total lines of JavaScript code? Ideally the tool used to count this would skip blank lines and comment lines. For both this line count and the file count, it is best to count only project code – sometimes these numbers can get artificially inflated by third-party code you are merely using, which happens to sit in a library directory in your project. (A minimal counting script appears after this list.)
  • How many template files?
  • How many total lines of template?
  • How many Angular directives?
  • How many Angular components?
  • How many Angular services?
  • How many different third-party JavaScript “widgets” are used?
  • Mention any major add-on libraries that the application uses extensively; for example Lodash.
  • Subjectively, how many distinct screens/pages are there in the sites/applications? Is there a large amount of code driving a small number of complex pages?
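
As promised above, here is a minimal sketch of a script for the first two numbers; the directory name and skip list are assumptions, and block comments are not excluded, so treat the output as a rough count.

import { readdirSync, readFileSync, statSync } from 'fs';
import { extname, join } from 'path';

// Directories of third-party code to skip, so the counts reflect project code only.
const SKIP = new Set(['node_modules', 'bower_components', 'vendor']);

function countJs(dir: string): { files: number; lines: number } {
  let files = 0;
  let lines = 0;
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) {
      if (SKIP.has(name)) continue;
      const sub = countJs(path);
      files += sub.files;
      lines += sub.lines;
    } else if (extname(name) === '.js') {
      files += 1;
      // Count lines that are neither blank nor obvious `//` comments.
      lines += readFileSync(path, 'utf8')
        .split('\n')
        .filter(line => line.trim() !== '' && !line.trim().startsWith('//'))
        .length;
    }
  }
  return { files, lines };
}

console.log(countJs('src'));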

Sounds good, how do I buy this?

This is just a blog post, but if you contact us we can discuss your particular project in need of analysis, get a sense of its size, and prepare a service agreement to begin.

Environment-specific modules, services and components in Angular

Sometimes your Angular application needs to be a little bit different depending on the environment. Maybe your local development build has some API services stubbed out. Or you have two production destinations with a little bit different behavior. Or you have some other build-time configuration or feature toggles.

The differences can be on any level: services, components, suites of ngrx/effects, or entire modules.


The dependency injection and module mechanism in Angular is pretty basic, and it does not seem to offer much to answer such use cases. If you want to use ahead-of-time (AOT) compilation (and you should!), you can’t just put arbitrary functions in a module definition. I found a solution for doing it one service at a time, but it’s pretty ugly and not flexible enough. The problem does not at all seem uncommon to me, though, so I wanted to describe a nice little trick to work around this.

Sample app

Let’s study this on a sample app. To keep things interesting, let’s make a realistic simulator of various organization hierarchies.

We’ll need a component to tell us who rules the “organization”:

app.component.html
<h1>Who owns the place?</h1>
{{ (ruler | async)?.name }}!
app.component.ts
import { Component } from '@angular/core';
import { Observable } from 'rxjs/Observable';
// Local import paths below are illustrative.
import { Ruler } from './rulers/ruler';
import { RulersService } from './rulers/rulers.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styles: []
})
export class AppComponent {
  ruler: Observable<Ruler>;

  constructor(rulers: RulersService) {
    this.ruler = rulers.ruler;
  }
}

As well as a service:

rulers.service.ts
import { Observable } from 'rxjs/Observable';
// Local import path below is illustrative.
import { Ruler } from './ruler';

export abstract class RulersService {
  abstract get ruler(): Observable<Ruler>;
}


Now, let’s say we have two environments:

  • Playground, where little Steve would really like to own the place… except that he can only do that when the local bully is not around, and that’s only when he’s eating. To determine the ruler we need to ask the Bully’s mother about his status. So it goes.
  • Wild West, where the rules are much simpler in comparison: it’s the sheriff who runs the show, period.

In other words, we need two different implementations of RulersService, selected per environment at build time.

So, how to achieve that?

Solution

The solution is actually pretty straightforward. All it takes is a little abuse (?) of Angular CLI environments. Long story short, this mechanism provides a different environment file to the compiler based on a compile-time flag.

If you check the documentation or pretty much any existing examples, the environments are typically used to provide different configuration: just a single object with a bunch of properties. You might be tempted to use it in some functions in a module definition, but it’s probably not going to work with AOT. Oops.

However, at the end of the day it’s just a simple file substitution. You can put whatever you like in the file, and as long as it compiles everything is going to be OK. Classes. Exports from other files. Anything.
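
Concretely, the substitution is configured in .angular-cli.json and selected with a flag at build time, something like ng build --environment=playground with Angular CLI 1.x syntax. A sketch matching this sample app (check your CLI version’s documentation for the exact keys):

.angular-cli.json (excerpt)
"environments": {
  "playground": "environments/environment.playground.ts",
  "wild-west": "environments/environment.wild-west.ts"
}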

In our case, the AppModule can import the RulersModule from the environment. It doesn’t care much what the module actually contains.

app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { RulersModule } from '../environments/environment';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    RulersModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

The environments would export it from the relevant “package”. They could have the classes inline, but I prefer to keep them in separate files closer to the application.

environment.playground.ts
export const environment = {
  production: true
};

export { PlaygroundModule as RulersModule } from '../app/rulers/playground/playground.module';
environment.wild-west.ts
export const environment = {
  production: true
};

export { WildWestModule as RulersModule } from '../app/rulers/wild-west/wild-west.module';

Now, the modules:

playground.module.ts
import { NgModule } from '@angular/core';
import { RulersService } from '../rulers.service';
import { PlaygroundRulersService } from './playground-rulers.service';
import { BullysMotherService } from './bullys-mother.service';

@NgModule({
  providers: [
    BullysMotherService,
    { provide: RulersService, useClass: PlaygroundRulersService }
  ]
})
export class PlaygroundModule { }
wild-west.module.ts
import { NgModule } from '@angular/core';
import { RulersService } from '../rulers.service';
import { WildWestRulersService } from './wild-west-rulers.service';

@NgModule({
  providers: [{ provide: RulersService, useClass: WildWestRulersService }]
})
export class WildWestModule { }

The final trick that makes this work is that there is an abstract class/interface for the RulersService, and AppComponent (or other services, if we had them) only depends on that. Actually, this pattern has been around for a long time: it is good old programming to interfaces.

The same goes for the RulersModule: in a sense it’s abstract too. AppModule doesn’t know or care what concrete class it is, as long as a symbol with the same name exists during compilation.

This sample demonstrates it on the module level, but you can use the same trick for any TypeScript code, be it a component, a service, etc.

I am not sure if such use of environments is ingenious, or insane and abusive. I have not seen this anywhere. However, it does solve a real problem that does not seem to have a solution in the Angular toolbox (and it’s not particularly toxic).

Sample app on GitHub

Check out this sample app at my GitHub repository.

Angular universal / server side rendering

The current state of server-side rendering (so-called “universal”) for Angular is somewhat in flux in mid-2017. There had been an early Angular Universal effort by an outside group, which has now been absorbed into the core Angular team at Google. They are working toward a new release (to become part of an Angular release) which integrates it tightly as a fully supported, first-class piece of the Angular tool suite.

For very eager developers, it is possible to use some of these tools now; but it should become easy and mainstream in the coming months. The primary use cases are:

1) SEO

Sites/applications which have publicly exposed pages needing search engine optimization prefer to statically render (and serve) the key SEO pages. Historically this was vital, because search engines did not execute JavaScript on the pages being indexed. However, for at least the last several years, Google and its other top-tier search engine competitors do execute JavaScript on a page, so the SEO use case is not as important as it used to be. Many still prefer to statically render pages for maximum SEO, nonetheless.

2) Progressive Web Applications

This is the current leading edge of aggressively performant web applications for mobile devices. The idea is to statically load the outer “shell” of an application with some initial static content displayed, then replace that initial content with fully dynamic content a few seconds later. That initial load involves the smallest feasible amount of HTML, CSS, and JavaScript.
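
As a minimal sketch of the shell idea (the markup is hypothetical), the pre-rendered static content sits inside the application’s root element, and Angular replaces it when it bootstraps:

index.html (excerpt)
<app-root>
  <!-- Static shell content, visible immediately; replaced on bootstrap. -->
  <header>My App</header>
  <p>Loading live content…</p>
</app-root>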

PWA is a compelling idea, but there are practicalities to its appeal:

* PWA is mostly relevant on mobile devices. An optimized Angular application will load very quickly on a desktop machine regardless.

* PWA is most important on down-level devices and networks. It doesn’t make as much difference for those of us sitting in well-networked places on LTE with current-generation smartphones – or on fast corporate networks.

* PWA matters most on the first load of a page/application. After that, many of the assets will be cached so the real content will load very shortly after the progressive pre-render.

The coming default

We believe the tooling will eventually work so nicely, that static pre-rendering and PWA become straightforward, or even become the default path for new applications. But keep in mind the practicalities above – for many use cases, pre-rendering and PWA make sense to adopt only when the tooling makes it very straightforward. Developer efforts between now and then probably pay off more if directed toward application functionality and polish.

Angular 4 rc.1 AOT build options – with example projects

Since the summer 2016 production release of Angular, most users have treated AOT as a future curiosity. As of late fall 2016 though, many (particularly those working on larger applications) have been much more eager and interested to use AOT. Here at Oasis Digital, we have recently updated our Angular 2+ curriculum to ensure the numerous code examples used in class are AOT-ready.

Although most of our production Angular 2+ work uses the Google-sponsored Angular CLI (which has excellent AOT support), we’ve also been working with various alternative tooling stacks. Some of our customers integrate their Angular applications with broader build processes and are looking for more fine-grained control than they get with the official CLI.

Last week, Angular 4 rc.1 shipped with additional library packaging bundles, FESM and ES2015 FESM. These should support tighter production builds than before, and more easily; the official CLI does not take advantage of these yet (though I expect it will soon), and I was eager to experiment.

The results

https://github.com/OasisDigital/angular-aot-es2015-rollup/tree/master

In this example, the build is performed using:

  • AOT (ngc)
  • Rollup
  • Buble
  • Uglify

See the README in the project for a lengthy explanation of how it works and why these tools were chosen. It was mostly straightforward to make this work; the configuration is quite simple. However, as of the beginning of March 2017 there is an important Rollup plug-in which does not yet have the ability to consume the new Angular ES2015 FESM bundles. To work around that, I published a (hopefully temporary) fork of that plug-in, “@oasisdigital/rollup-plugin-node-resolve”.
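
For reference, the Rollup step in that build looks roughly like the sketch below (paths are illustrative; see the repository README for the real configuration):

rollup.config.js
import nodeResolve from '@oasisdigital/rollup-plugin-node-resolve';

export default {
  entry: 'aot-build/main.js',    // output of the ngc step (illustrative path)
  dest: 'dist/bundle.js',        // then fed through Buble and Uglify
  format: 'iife',
  plugins: [
    // Prefer ES module ("module"/FESM) entry points for better tree shaking.
    nodeResolve({ module: true })
  ]
};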

Another variation

The boring, excellent, proven, and still frequently updated Google Closure Compiler can often produce better results than newer, hip tools. Therefore the following variation/branch:

https://github.com/OasisDigital/angular-aot-es2015-rollup/tree/closure-compiler

…replaces a couple of the tools with Closure Compiler. It uses only:

  • AOT (ngc)
  • Rollup
  • Closure Compiler

With fewer tools, it produces a smaller result. The configuration is slightly more complex (mostly because the Closure Compiler JavaScript port is not quite the same as the Java edition yet), but is still quite manageable.

I have not yet compared this with an even shorter stack (using Closure Compiler for the tree shaking as well), as there are already examples around doing that. I expect an upcoming enhancement to Closure Compiler will add the “es2015” package field support needed for the ES2015 FESM bundles; once that is in place, I am curious whether Rollup or Closure (both well respected as excellent tree shaking tools) will produce tighter results.

Why this matters

For projects deployed on an intranet, it’s possible that none of this matters. A very large internal enterprise project might ship a total of 6 MB of compressed JavaScript (hopefully divided across various bundles loaded on demand) with the default tooling, or 5 MB with tweaked tooling. That won’t matter across a gigabit network with people mostly using an application frequently (and therefore with the JavaScript mostly in cache).

But not all projects are huge or internal. Angular is also well-suited for medium-to-large projects deployed on the open Internet to a huge number of sporadic users. For these users, who might be on slow connections, saving bytes counts. Faster load times translate to more user engagement. Better production bundling expands the reach of Angular to more kinds of projects.

The above is not even counting mobile; as Angular mobile application development continues to increase, the tightest possible production bundles will matter more and more.

 

The Heart of BI / OLAP is your data

There are plenty of vendors eager with a sales pitch for BI/OLAP projects, eager also to give you the impression that all you need to do is buy their product. This is wrong, perhaps dangerously so, because the heart of BI / OLAP is your data, and the core challenge is to transform your data into a form where it can be easily and correctly analyzed.

The principles of operation are similar regardless of which products you choose. Your toolset will consist of, at minimum:

  • An OLAP tool, with or without an RDBMS behind it
  • An ETL tool, which might be a software product or might be a set of scripts
  • Hardware to run it on, chosen and configured to serve an analytic load well;
    this could be hardware you own, or a SaaS or cloud offering
  • Configuration thereof

But this list understates perhaps the most important part. ETL needs extensive configuration (in moderate cases, with powerful ETL software and some luck) or, more likely, carefully crafted software to transform business data from whatever form it lies in, into a shape suitable for analysis.

Of course I cannot help but mention that Oasis Digital works on such projects; but regardless of us, effective OLAP involves an astonishing amount of “getting your hands dirty” digging in and understanding the precise meaning of bits of data flowing out of one or more (perhaps many more) operational systems. This work is arguably so unpleasant that it leads many organizations to skip OLAP, but that actually misses the point. The work can’t be avoided, without also avoiding the full value that could otherwise be obtained from a correctly populated analytical data store.

Alternatively, the work could be skipped, if you don’t mind incorrect analytical data and incorrect conclusions drawn from it. This doesn’t seem like a strategy for success.

 

How we use Git, and why

Here at Oasis Digital, we use Git for source control for nearly all of our projects. There are numerous different ways to use Git, and over many projects we have evolved a set of effective practices. We have found the approach described here works very well for almost everything we do, though of course as a consulting organization we sometimes adjust things to meet a specific customer need.

Why Git?

Ubiquity

A decade ago, distributed source control arrived suddenly in the mainstream, after years as a niche segment. There were a number of contenders, but Git “won” by a large margin. Today Git is essentially the default choice, the powerful choice, the ubiquitous choice.

Incidentally, Git’s endless flexibility is a major reason it won the race. Various other competitors typically had better ergonomics, easier adoption, easier understanding… but less flexibility. Flexibility means that organizations large and small can use it in a way that meets specific needs.

Technical excellence

Unlike with some other systems, we have never lost a line of code due to a defect in Git. Further, we have never needed an operation which is fundamentally impossible with Git. We have found that, within the confines of our project sizes, it scales extremely well. (Though see a section at the end about limitations in this area.)

Multiplatform

Our projects are often developed across Linux, Windows, and Mac. Git works very well across all of them. Notably, it offers solutions (rather than a “head in the sand” approach) for differences around line endings among platforms. Yes, it is inconvenient to deal with line ending differences. Git has the tools to do it and get good results, without trying to pretend that one platform is another.
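
For example, a single .gitattributes entry is often enough to get sensible cross-platform behavior (one common choice, not the only one):

.gitattributes
# Normalize line endings for text files in the repository; each developer's
# checkout can then use platform-appropriate endings (see core.autocrlf).
* text=auto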

Distributed

Git can operate without a network connection, and more importantly, it operates locally at the speed of a local machine rather than at the speed of a potentially overwhelmed, faraway server. 90% of Git operations are completely local. This is an enormous benefit during daily use, though it has a downside – more risk of a developer forgetting to push code to a server. We handle that concern so well in other ways (project team discussions) that it has never caused difficulty.

Ecosystem

Due to ubiquity, Git has spawned a vast ecosystem of related tools. Nearly every editor and IDE understands Git. Nearly every build or continuous integration mechanism understands it. Nearly every code review tool understands it. There are numerous graphical Git tools for every platform, and Git is not dependent on any one vendor or team to continue producing quality results.

How we use Git

Use it as an expert

As with most other tools we use, we are committed to expert-level mastery, not muddling through. Oasis Digital developers learn to understand the fundamental Git data model of a tree of commits, each containing a tree of files. We learn the essential operations and common variations. We learn to understand what the commit tree looks like, what we want it to look like, and to choose the right command to get from one state to the other. We are “source control nerds”.

We believe this makes sense because source control is not merely an ancillary tool: change management is deeply fundamental to robust long-lived systems. Source control is not an inconvenience, it is an accelerator.

We treat (portions of, read on) the Git commit tree as putty in our hands. This is an intentional trade-off, versus the notion of using a restricted subset of Git, but we believe for our use cases it is the right trade-off.

Develop on branches

We develop on branches, not on master. In all but the smallest projects, branches go through review and discussion before landing on master.

We use many branches, large and small. We don’t pointlessly mix unrelated changes in the same branch. Git switches among and manages branches almost instantly, so the cost of using additional branches is nearly zero. We have found that the mental overhead of managing more than one branch using a tool is much less than the mental overhead of juggling multiple unrelated changes in the same branch, a phenomenon we see regularly when developers use tools which make branching difficult or slow.

Branching model – varies

We manage those branches differently, depending on the needs of a project:

  • small branches, directly off master
  • shared branches, as needed
  • release branches, to support old releases
  • development branches
  • trunk-based development, with small or large branches

 

Master always works

Master (and for more complex projects, certain other branches) always works. Code is reviewed and tested before going on master, not only after. If you read and adopt only one bit from this whole page, this is the most important. Review and commit and make the code good before it goes on master, don’t put junk on master and hope for a drive-by review-fix later.

Master strictly improves

Because we test and review and fix code before it goes on master, master (again, for complex projects, certain other branches) strictly improves. Each master commit adds at least one improvement or fixes at least one problem, without making the software worse. Of course we do not reach the standard perfectly every time, but this is the standard we aim for, and we reach it most of the time.

Immutable master; mutable development branches

Except in rare cases, once commits are on master we treat them as permanently immutable. They form a long-term record of the history of the project. They are a curated, comprehensible telling of the story of the development.

The work performed on branches is iterated repeatedly on branches. Sometimes it is squashed and often it is rebased. Only when it is ready (to act as a strict improvement to master) does it get a final review, squash, rebase, and merge. (As with everything else, except for certain special cases.)

Incidentally, this means that we very rarely have a merge commit on master. The long-term history of our development efforts are easy to read, as a mostly straight-line of medium-sized (not too large, not too numerous) commits.
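
In command form, the lifecycle of a typical branch under this approach looks roughly like this (a sketch, with an illustrative branch name; details vary by project):

git checkout -b improve-search master   # develop on a branch, not on master
git commit -a                           # commit early and often...
git push -u origin improve-search       # ...and push, even as work in progress
git rebase -i master                    # squash/reword into a clean, reviewable change
git checkout master
git merge --ff-only improve-search      # land it; master stays a straight line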

Most of our developers on most of our projects are not even able to push to master – and are hardly affected by this restriction in daily work. Even with this process, a typical developer will have a commit land on master between one and a few times per week, depending on numerous factors around how finely the work is divided in the project.

Commit early, commit often, commit anything, push

The corollary of our high bar for work that goes on master is an extremely low bar for work that can go onto a branch and be pushed. We rarely go to sleep with uncommitted or unpushed code. When we need another developer to look at our work in progress, we commit it and push it as work in progress. When a new developer starts, they often commit and push code to a branch the first day. By keeping this bar low, we provide the maximum opportunity for new developers to become fluent with source control and to use our tooling as a communication mechanism (“here is some code, will you look at it for me?”), early and often.

Squash out the mess

There is an inevitable, human tendency to be careful with anything that will be “part of your permanent record”. This is built into all of us, though it seems to be more pronounced in some cultures than others. But by combining our very low bar for what can be committed and pushed with a policy of squashing minor steps together to yield an aggregated, high-quality change, we moderate this tendency very effectively. Code can be pushed any time for just a casual look. It won’t be part of the permanent history until it is good enough. No fear.

Left to its own devices, the tendency to be careful would force developers to “raise the bar” of what gets committed or pushed. A rational developer (regardless of policy) in such an environment must be careful about what they put in source control. We have found that this inevitably pushes developers to do more of their work without source control – for example, by shuffling files locally outside of the source control tool, by leaving work unpushed, etc.

Source control is a very powerful tool, and a low-bar-push-often-squash-rebase approach puts that power into the hands of every developer, individually as well as all developers working together.

But squash carefully

Still, squash and rebase are potentially troublesome features. So when we squash, we do not squash arbitrarily. We aim to squash groups of related changes, worked on along a path to achieve a specific feature, but not to squash unrelated work. One minor caveat though: to tell the story of the development in the most comprehensible way, the permanent master commits should be of both medium size and medium number. Therefore we sometimes squash together a set of individually very small changes, when we can do so without creating difficulties.

Use a GUI

Many very skilled developers are fond of command line interfaces, for many good reasons. The Git command line interface, while slightly problematic in some ways, is extremely powerful for making changes. However, it is not particularly well-suited for understanding a Git commit tree, especially on a busy project with numerous concurrent branches. For this reason, when working heavily with Git we always use a graphical interface at least to visualize and understand the commit tree, even when we use the command line for manipulation. There are various high quality choices for GUIs on each platform, and on all platforms the built-in “gitk” interface is very helpful in a pinch to understand the commits – without any additional tool selection, download, or install.

Work together, learn, teach

We work together regularly. A group of developers working on the same project will sit in a collaboration space together (inviting remote members online) at a giant screen attached to a fast development machine. We work on the hard problems together, we review tricky code together. We learn and teach together, how to use all of our tools, including Git.

Caveats, limitations, and context

Small, medium, and large, but not huge

Git, and the practices here, are useful on our projects of small, medium, or largish size. Beyond a certain size, many of the practices would still work but Git becomes awkward. For example, some of the largest software operations in the industry (Google, Facebook) use a “monorepo” with all of the code for all of their projects and nearly all dependencies thereof. A different kind of toolset is needed for such extremely large repositories.

Source code, but few binary assets

Further, we keep small, medium, and somewhat large collections of source code in Git; when our work has occasionally involved many large media assets, we have used other means to track them. Git, specifically, is probably not a suitable choice for (for example) a large-scale game development effort with hundreds of gigabytes of binary assets.

Skill and understanding needed

Here at Oasis Digital, we are not in the “get a bunch of people to grunt out some code” business. Instead we are in the “recruit and hire good people, and help them become great” business. This is necessary for us, because of the premier customers we serve, and because we not only develop, we also teach. We are all about deep mastery of tools. Our Git approach fits very well with this context.

We’ve sometimes heard the objection to Git, that it is difficult for less skilled developers to understand fully and use correctly. This is possibly true, but it is not that important to our context – and we think it is not that important at all. Anyone with the capability of understanding software enough to develop quickly, efficiently, and with quality, certainly has the capability to master Git.

With great power…

Even with good understanding, Git is still relatively easy to misuse. It offers numerous very powerful commands, it is like a chainsaw and a machine shop. It is not like a set of safety scissors.

We have found that this power punishes developers who don’t look at a graphical representation of the commit tree regularly. It rewards developers who do.

Syntax

Even keeping in mind the power of the Git command vocabulary, its CLI syntax leaves much to be desired. It shows clear signs of having evolved awkwardly, with early accidental complexity retained for backward compatibility.

However, we have found that developers who have the discipline to look at a graphical representation of the commit tree generally gain a better understanding of the operation they want to perform, and are more likely to perform it correctly even when using the CLI.

Beware neutered GUIs

Some Git GUI products aim to simplify Git “for the masses”. In the process they manage to do a terrible job of the most essential function: visualizing and understanding the commit tree. Beware of a tool which resists showing you this tree. Even following our practices described here (where master will be a straight line most of the time), it is frequently necessary to understand a bunch of concurrent, possibly tangled branches when a project is under heavy many-developer work. The right GUI makes this fast and easy.

Churn

I think the Git community is now on its third major mechanism for subproject use. We avoid subprojects as much as possible, but experience some frustration when we must use them.

Revisionist history

Our approach clearly and intentionally yields a revisionist, streamlined history. As described above, the resulting history is crafted for understanding, and for (rarely) backing out the final set of related changes for some work rather than miscellaneous partial changes. Still, some people are uncomfortable with the revisionism, and prefer a full record of every step along the way. This is a trade-off, where we have found the revisionist approach has more upside than downside.

Discipline needed

Even with the restrictions, it still requires discipline to avoid creating a branch from a spot in history (something not on master) that will change in the future.

The Oasis Digital Spectrum of Services

In the early years of Oasis Digital, we offered exactly one service: outsourced software development contracting. Since then, we’ve expanded to a spectrum of related services. The result doesn’t fit in an “elevator pitch”, but it meets the needs of customers much better. Our “spectrum of services” will be more clear on our main website and elsewhere over time, but this blog post explains it as succinctly as we can. Ranked in approximate order from smallest to largest, we offer:

1) Free technical content

Our expert developers/trainers speak and write about relevant technical topics, and the results are almost always freely available: talks on our YouTube channel, posts on our blog, our Twitter accounts, and so on. Further, we attend various conferences and are always happy to speak with people who come talk with us there.

2) Tickets to public classes

We teach on several technical topics, most prominently Angular Boot Camp. Class tickets are a great fit for an individual developer or small group, who can purchase, then attend online or in person around the US and occasionally around the world.

3) Private training classes

To train a whole team at once, we offer private classes, both online and in-person. Private classes can also easily be extended with add-on days of customized consulting and training, for customers looking for added value. Some customers engage us for a series of private classes.

4) Application assessment

An application assessment is a short consulting engagement (typically 1 week, with 1 or 2 of our experts) in which we meet with a new customer in-depth, to assess an application (or understand the vision of an application). The assessment includes a written report, and (if needed) a proposal for future recommended work.

5) Ongoing expert assistance

Oasis Digital can provide ongoing expert assistance, in a retainer-like arrangement. We regularly meet with your developers to help with questions, issues, code review, design guidance, and implementation of key areas. We have different packages depending on how much assistance you need each month.

6) Agile product development

In an Agile development project, Oasis Digital works with a customer on an iterative basis, prioritizing features (“stories”) and responding to changes and guidance. Such a project is especially suitable when the product vision is established but feature needs are still evolving. An Agile project is straightforward to contract and price (based on the team size), and can start quickly then last as long as needed. We often begin Agile projects with Oasis Digital developers, then gradually integrate customer developers over time for an eventual handover. This project style is also well suited when Oasis Digital is joining an existing effort already in progress.

7) Scoped product development

In a scoped product/project, the features needed are worked out in advance (a scoping effort might be part of an application assessment, for example) so that Oasis Digital can provide a price and schedule to achieve that known list of features and surrounding goals. This style of project is decidedly less agile (changes and additional features are generally implemented after successful delivery of the initial scope), but can also ultimately be more efficient – our experts are especially adept at skipping directly to a high quality approach, avoiding false starts and reducing rework during iteration.

 

Building a proof of concept – off the ground in a few weeks

Many iterative-development thinkers have a notion of an “iteration zero” at the beginning of a project, that does not involve much software development but rather understanding the problem, choosing technology, choosing a set of features for a first major release, and so on. That work is often described as what happens at the beginning of a project, the beginning of ongoing work.

Having worked with many customers in the early stages of a project, we see a place for a small project-before-the-project: not necessarily a first iteration to start ongoing work, but rather work before a decision has even been made about when or whether to do a full project, to commit to ongoing work.

A customer comes to us, eager to show the potential of a project, but not yet in a position to commit to a long-term effort, nor eager to create a pile of non-software artifacts. Rather, they want to quickly show that something could work, how it could work, have some code in hand to serve as a working rough prototype. These needs will not be met by starting an effort that will take many months and many dollars to yield the first working result.

This is the problem we solve with a proof-of-concept engagement: One short pass through the whole development cycle. Such a project is a limited time and scope effort to understand a problem domain (just a little bit) and then write some software (just a little bit) that demonstrates the value of a customer’s idea. Such a project looks something like the description below, of course the details vary.

(A proof of concept effort is very workable for a greenfield project, but not applicable inside a system that has an existing substantial code base. For those, we have another kind of initial engagement, to get an existing system up and running and survey its code base.)

Scope

  1. Meet with customer domain experts, typically half a day at a time over a couple of days. Understand enough to do initial, relevant work.
  2. Sketch (with a mix of code, drawing tools, and paper) what some of the most critical screens will look like.
  3. Verify with customer domain experts that the ideas we have captured are the most relevant to their vision.
  4. Implement: expand the most important aspects from mere static screens or drawings. Create working prototype code.
  5. Integrate working screens with other static mockup screens.
  6. Present the working prototype software to the customer, including source code.
  7. Assist the customer with installation of the working prototype in their environment, to enable easy internal demonstrations.
  8. Present a video demonstrating the working system; we have found this especially valuable when the prototype needs to be shown to a potentially wide audience around the customer organization.
  9. Discuss future steps with the customer.

The bulk of this work is understanding the problem and creating working code. A proof of concept effort is not about creating another long requirements document. It is not about working through all the details. It is about code that demonstrates a working implementation of the essence of the customer vision, demonstrating that if implemented it would solve a problem worth solving.

Technology

To get the maximum benefit from a short effort, we use technologies our team has extensive current experience with. Typically that will be (as of early 2017) Angular or React on the front-end, Node or Java on the backend. We work with numerous other technologies for production efforts, but some of them are less amenable to shipping a working result within days of project start.

Team

To deliver a working result in a short time, the team on this kind of project consists of:

  • Two core, highly experienced developers (developer / trainers)
  • Assistance from other developers
  • Assistance from a designer
  • Assistance from a project manager

Schedule

The work happens over a 1-2 week period. Scheduling can be tough, because the developers involved will also be the developers who have the deepest mastery of the technologies to be used – developers who also teach our classes and lead project teams. We work with customers to choose the right start date to do this successfully.

Location

Typically this work happens at our headquarters, with the meetings conducted using the usual remote meeting technology, or occasionally with a customer expert visiting. It is also possible to send a team to a customer site for this work – although with that variation, there is less opportunity for additional Oasis Digital team members to jump in.

Price

We charge a fixed, all-inclusive price for a fast proof-of-concept effort. Since the investment is known up front, there is no risk of exceeding an agreed budget, missing an estimate, and so on.

Where to go next

After a completed proof of concept project, our customer has working prototype software. But it is not a prototype in the sense of poor, hastily built code; because it is built by our instructor/developers, it is of surprisingly good quality, ready to form a starting point for a more substantial development effort.

If a development effort is warranted (sometimes the thing you learn from a working prototype is that the idea is not worthy of major investment after all!), a customer in-house team could pick up the working prototype source code and run with it; or of course we at Oasis Digital can be involved in ongoing work.

Sounds good, how do I buy this?

This is just a blog post; there is no “buy now” button. Contact us to talk about your project; if it is suitable for a proof of concept project, we can send a proposal.

 

Angular 2+ Build Tooling – Recommendation

As of December 2016, what tooling should be used for a new Angular 2 project?

This is a question we get from customers and students frequently. Here is our current best advice, which will change over time. The context is critical: projects that may start small but are likely to grow to significant, complex enterprise applications.

Here is the path we have been following and recommending. At Oasis Digital we have had excellent results with Angular 2 (soon to be 4, and beyond).

Use the official Angular CLI, which is full of excellent ideas but is also still in development, working toward a solid “V1”. As your project starts out simple, it is extremely easy to get up and running this way, and to get very good results. Highly recommended.

As your project grows in complexity, consuming and using the CLI will need some ongoing attention from a team member on your project. A complex Angular project needs a build guru, who should:

  • Tune into the Angular CLI community, become aware of what is going on with CLI
  • Visit the Angular CLI issue tracker once a week or so, read some recent issues
  • Read some recent commits, especially when thinking of upgrading
  • Visit the Angular CLI Gitter channel from time to time
  • Choose an Angular CLI version wisely, and pin it per project (see the sketch after this list). For example, as of mid-December 2016, CLI Beta 21 is the right choice for most projects, while the more current Beta 22 will land you among some current challenges around AOT compilation and third-party libraries.
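
A minimal sketch of pinning, assuming the beta-era package name (the version shown is illustrative; adjust to the version your build guru has vetted):

package.json (excerpt)
"devDependencies": {
  "angular-cli": "1.0.0-beta.21"
}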

With this awareness, when you encounter difficulties you will likely recognize what is going on, and be able to work around it quickly. This has certainly been our experience; we have never had our progress delayed by build issues for more than a short time. But if you don’t have this awareness, you risk a build issue derailing your project for days or more.

If your project becomes so complex that this strategy for using Angular CLI does not work, it may eventually be necessary to set aside Angular CLI for a while, and instead adopt one of the Angular 2 “seed” projects. These projects typically ship with a surprising amount of complexity which will become part of your project, and to the extent you edit any of this complexity, upgrading becomes difficult. Therefore we recommend not starting with a seed, rather keeping it only as a fallback plan if your project reaches a point where it cannot proceed with the CLI.

 

Now Hiring, Angular+fullstack+polyglot developers, St. Louis MO

Update: We’ve hired, and have more hiring in the pipeline. Thanks to everyone who applied. We will post again for more hiring in the future!

—————————

Oasis Digital is now (and has been for a while) looking for more great developers to join our team. All the usual information is available on our careers page.

We’re also trying something a little bit different, and it seems worthy of a blog post. The typical hiring process is generally analogous to a sales process, followed by an interrogation. It works something like this:

  1. Put up a job post, a form of advertisement, with just a small amount of information to entice a wide swath of applicants.
  2. Receive a big pile of resumes.
  3. Identify potentially strong candidates among them.
  4. Engage in a series of one-on-one conversations with each of those folks, in which the first major step is to explain a bunch more about the job, to try to persuade the strong candidates to stay interested.
  5. Evaluate the candidates, interview, ask gotcha questions, squabble about pseudocode on whiteboards, etc.

We are experimentally trying something different, a process more like this:

  1. Put up the job post, but have it include extensive information about how we work. There is a lot about how we work that is not particularly secret, so why keep it secret?
  2. Receive the resumes, identify the candidates, have the conversations, etc. as usual.
  3. Rather than pseudocode and whiteboards, part of our interview process is to code together (as a pair / group / “mob”) with strong candidates. A couple of hours spent doing that sheds light on mutual fit for working together. It’s very easy to do for local candidates, and only a bit harder with remote candidates.

This has two essential differences from the typical process:

A more marketing-like approach to information dissemination. We are simply handing out a lot of the information about what it’s like to work here, anyone who wants to watch a video. It’s like marketing where you can just watch at any time. It’s not like sales or you have to submit your information and get into a one-on-one discussion.

A more collaborative approach to figuring out if there is a good mutual fit. Although inevitably sitting to code with other people has an element of being “on the hot seat”, it is less adversarial than the traditional technical interview.

Will this work? We don’t know; we’re just trying it. Stay tuned. And of course, if you are interested in working as part of the Oasis Digital team, please see the careers page!