Task-Based User Interfaces

Here at Oasis Digital, the bulk of our work is on complex business data systems. These projects sometimes involve a so-called task-based user interface. Briefly, this is an interface where the operations available to the user are presented in terms of the problem domain, rather than in terms of editing data. Other names for this idea include “task-focused user interface” and “inductive user interface”.

Origins

I was unable to determine the specific research/practice origins of task-based UI, but it is most commonly seen as arriving from three main directions. First, the inductive user interface guidelines published by Microsoft around 2001 lay out a detailed description and motivation. Second, the domain-driven design (DDD) and command query responsibility segregation (CQRS) communities often describe task-based UI as a good fit in front of systems built with those principles. Third, mass-market software offers numerous examples.

CRUD

Task-based UI is the opposite of CRUD UI, a convenient foil. Many software systems have numerous features which are readily described as CRUD: create, read, update, delete. Some developers describe their work as primarily creating CRUD systems, with either pride or disdain. When analyzing a proposed feature or proposed software project, we sometimes evaluate the extent to which the project is “just CRUD” versus behavioral functionality. Brought to the user interface, CRUD typically means screen after screen in which users can make an entity of some kind, find an existing one, update one, or remove one. In other words, a minimal veneer over basic database operations.

Simple CRUD user interfaces have the advantage that they are generally easy to create and amenable to common tooling. Many development environments offer wizards or libraries to ease such creation, a practice that dates back to at least the early 1990s, and probably further.

Therefore, CRUD has a lot of appeal when first creating a new application. Unfortunately, a system built primarily around CRUD tends to make behavior beyond minor validation unduly difficult to implement. If you find yourself squeezing business logic into an “on change” or “on click” event handler somewhere, you are probably experiencing this difficulty. As systems grow in complexity this bad fit can consume vast amounts of time and money.

There is a way out, and that is a task-based user interface, one which presents the user with options and actions which fit the problem domain of the software, rather than those which appear to manipulate data in trivial ways.

Change of Address Example

A common example to explain task-based UI is a change of address operation in a system which has subscribers, members, or some other class of people to whom things might be mailed etc. The CRUD approach to change of address is simple: the user searches for the appropriate user record, views that record, moves the cursor to the address fields, types over those values, and clicks “save”. Very convenient, very easy to program.

Unfortunately, if there is nontrivial behavior in the system down the line, it can be extremely difficult to determine what such an edit means. Was the address corrected because it had been mis-entered in the past? Was the ZIP code changed because the post office changed the ZIP code assigned to that physical location? Or does this edit represent the person moving to a different place of residence? If the latter, how might that affect them in terms of the problem domain?

To avoid this ambiguity, a task-based user interface would not merely present address fields which can be edited. Rather, the user would see information about a person (including their address), then have options (buttons or whatever else is suitable in the visual UI guidelines at hand) with names like “correct mis-entered address”, “subscriber moved to another residence”, etc. Such explicit actions make it possible to capture the underlying domain-level meaning, rather than just edits to data fields. A downstream system receiving this information could then respond appropriately.
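To make the distinction concrete, here is a minimal sketch (in JavaScript, with hypothetical command and field names) of what those two actions might send to a server, in place of a single generic field update:

    // Correcting a data-entry mistake: no domain-level consequences.
    var correctAddress = {
      type: 'CorrectMisenteredAddress',
      subscriberId: 42,
      address: { street: '123 Oak St', city: 'St. Louis', zip: '63101' }
    };

    // Moving to a new residence: downstream systems may need to react.
    var subscriberMoved = {
      type: 'SubscriberMovedToNewResidence',
      subscriberId: 42,
      address: { street: '500 Elm Ave', city: 'Chicago', zip: '60601' },
      effectiveDate: '2015-03-01'
    };

Either command might be POSTed to a hypothetical commands endpoint; each carries its domain meaning explicitly, so the two can be handled quite differently even though both ultimately change the same address fields.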

Choose Your Words Wisely

Task-based user interface is therefore to a considerable extent about words: the words which describe actions the user can take. If a system merely uses a task-based user interface on top of a CRUD-centric back-end implementation, then the words can at least be chosen well in the user interface. If the system uses a domain driven approach internally, then some of the well-chosen words in the user interface may become part of the ubiquitous language used throughout the implementation.

I encourage developers to consider a task-based approach even if the underlying system is not particularly domain driven.

Add, Edit, Delete, +, –

Speaking of words: having built numerous user interfaces, I still very often “default” to wireframing with button labels like “add” and “delete”. Such labels are fine for an initial wireframe, but task-based design principles suggest reconsidering whether a more specific set of operations is suitable.

An example came up at work today with a list of employees, where the initial proposal was a “delete” button next to each one. But one does not merely delete an employee: at even the most heartless of companies, there will be quite a lot more process involved in removing an employee from a list of employees. A system which manages employees, then, should probably have some sort of process flow around that removal, and it should use words other than “delete”.

A Final Quip

If the user wanted Excel, they would use Excel. If they are using an application which is instead specific to a problem domain, then it should have features, both internally and at the user interface level, relevant to that problem domain.

Links

http://en.wikipedia.org/wiki/Task-focused_interface
http://msdn.microsoft.com/en-us/library/ms997506.aspx
http://codebetter.com/iancooper/2011/07/15/why-crud-might-be-what-they-want-but-may-not-be-what-they-need/
https://cqrs.wordpress.com/documents/task-based-ui/
http://stackoverflow.com/questions/12255874/what-is-an-example-of-a-task-based-ui
http://codebetter.com/gregyoung/2010/02/16/cqrs-task-based-uis-event-sourcing-agh/

ES6 for AngularJS – Angular Lunch

A few months ago, Oasis Digital started a monthly St. Louis Angular Lunch. At the November 2014 lunch, we had a guest speaker: Mark Volkmann of OCI. Mark talked about ES6 and the Traceur compiler, and briefly how they fit into developing Angular applications today.

Transcript

We have transcribed the talk to text, provided below. This is a rough, first-draft transcription, so any errors are probably from that process.


JIRA Jumble, a Test Data Generator for JIRA

At Oasis Digital, we teach a JIRA class in which students learn hands-on how to build workflows, administer JIRA, and overcome common challenges. While writing the course material, we realized that it is difficult to fully understand the features unless you have a significant amount of data in JIRA: issues, projects, and users, with relevant dates falling near the date of the class.

One obvious way to obtain this is to use or copy a real (“production”) JIRA instance, but students are wisely hesitant, as are we, to bring up a copy of a production JIRA instance (full of proprietary information) to use in class. Instead, we need to be able to generate a populated JIRA instance with random but useful data. In the past there have been tools for this from Atlassian and other makers, but we found that none of those tools work with current (as of 2014) JIRA versions; further, they were packaged as plug-ins for JIRA Server, which means they cannot be installed into JIRA Cloud (formerly called JIRA OnDemand).

We solved this problem by creating JIRA Jumble, a random data generator tool for JIRA. We then realized that it would probably be helpful to many other people using or learning JIRA, so we made it available online at http://jumble.oasisdigital.com. We expect to add additional features and enhancements over time based on the class needs and the feedback we receive.


Please give it a try; we are eager to hear what people think of it.

As a reminder, this test data generator is intended to populate a new, empty, throwaway JIRA instance for testing and learning. It works with both JIRA Server and JIRA Cloud. Do not import its data into your production JIRA Server or JIRA Cloud instance!


Data Services for AngularJS Applications

I wrote recently about data/API services for complex “single page” JavaScript-heavy web applications. Everything there applies to AngularJS as well as other frameworks (Ember, Knockout, React, etc.), but there are some particulars to keep in mind for ease of interaction between an AngularJS web application and a data service. This topic also comes up in almost every “Angular Boot Camp” class I teach, so by posting this I have an online resource to point at.

Use RESTful URLs and HTTP Methods

By following RESTful patterns rather than inventing ad-hoc patterns for a server API shape, your code will fit in more easily with the rest of the software industry. This reduces the friction in bringing on more developers, in integrating your application with others, and so on. A RESTful server API will operate easily with AngularJS’s built-in mechanisms for working with a server, whether you use plain $http, $resource, or Restangular. For example, all of those already assume that the HTTP status codes indicate success and failure, so do not make up your own way of encoding failure within HTTP “200” success.
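As a minimal sketch (assuming a hypothetical /api/subscribers endpoint), $resource maps a RESTful URL onto ready-made methods, with success and failure driven by HTTP status codes:

    // In a controller, with $resource (via an injected factory) and $scope available:
    var Subscriber = $resource('/api/subscribers/:id', { id: '@id' });

    // GET /api/subscribers/42
    Subscriber.get({ id: 42 },
      function (subscriber) { $scope.subscriber = subscriber; },
      function (error) { /* any non-2xx status lands here */ });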

Avoid dogma in REST design, though. Most of the time, you can make a server API shape reasonably RESTful with approximately the same implementation effort as something ad hoc. If you have a case where a RESTful design will require substantially more work than something else, consider an “escape hatch”: a portion of your API that isn’t particularly RESTful, hidden behind a generic POSTable endpoint.

Make Data Binding Easy

I assume that by choosing AngularJS, developers aim to leverage its strengths rather than fight it. That means embracing “HTML enhanced”, data binding, and other banner AngularJS features; avoiding manual DOM manipulation (which immediately fails code review here); and mostly passing information around using bindings and $watch rather than events.

Therefore, if possible make your data service return data in a shape already suitable for binding in your AngularJS application. Done well, this can reduce the amount of JavaScript code you must write to populate a complex screen to almost nothing. By complex, I mean a screen which contains the modern equivalent of numerous master detail relationships, hundreds of fields, etc.
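For example, here is a minimal sketch (hypothetical endpoint and fields): if the server returns an order with its line items already nested, the controller code nearly disappears:

    // Server responds with a bind-ready shape, e.g.:
    // { id: 1234, customer: { name: 'ACME' }, lines: [ { product: 'Widget', qty: 2 } ] }
    $http.get('/api/orders/1234').then(function (response) {
      $scope.order = response.data;  // ng-repeat over order.lines does the rest
    });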

Of course, JavaScript is a powerful and convenient language for data structure manipulation, so in a sense it does not matter what shape of data is returned from a server; it is always possible to process that data into a shape suitable for data binding. Hence this guideline is oriented toward ease and speed of development when the server code is being written fresh.

(As an aside, I am not 100% sold on data binding; there are other approaches, like that taken by React, which have significant advantages. But if you are using AngularJS, embrace its strengths. Use data binding pervasively.)

Make Commands / Saves Easy Also

Following along from the above, a conveniently and idiomatically crafted AngularJS application will typically yield JavaScript data structures suitable for PUT or POSTing back to a data backend. Make your server accept PUT or POST of updated or new data in the same object “shape” as it returned data. This way, the data can make a round-trip from the server, through data bindings, to be manipulated by a user, then back to the server; all with very little code to write.

This applies to commands (in the CQRS sense) also. If a user is working with the screen via data binding, then that data binding can often yield a data structure ready to send to a server with a couple of lines of code. If you find it necessary to write extensive lines of code to formulate a server interaction, perhaps either your data binding or server should be adjusted stat.
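Continuing the earlier order sketch, the save path is just as short when the server accepts the same shape it returned:

    // PUT the user-edited, data-bound object straight back.
    $scope.save = function () {
      $http.put('/api/orders/' + $scope.order.id, $scope.order)
        .then(function () { /* saved; the HTTP status signals success */ });
    };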

Consider NodeJS on the Server Side, for Code Sharing

Working with complex client-side web development, you will almost certainly use NodeJS in your development toolchain, even if you are not using NodeJS for your data services.

But NodeJS is worthy of consideration for this key reason: it enables code sharing between the front end and back end. With some care in crafting modules, chunks of functionality which must be implemented on both sides of the client/server API (such as validation, but also many kinds of domain logic) can be reused across both, ensuring consistent implementation and eliminating duplication.
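As a sketch of the idea (hypothetical module and rules), a plain CommonJS module can run under NodeJS on the server and be bundled for the browser with a tool such as Browserify:

    // validate-subscriber.js - required by both the server and the browser build.
    module.exports = function validateSubscriber(subscriber) {
      var errors = [];
      if (!subscriber.name) {
        errors.push('Name is required');
      }
      if (!/^\d{5}$/.test(subscriber.zip || '')) {
        errors.push('ZIP code must be 5 digits');
      }
      return errors;  // an empty array means the data is valid
    };

The same rules then run eagerly in the UI and authoritatively on the server, with no duplicate implementation to drift out of sync.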

There are numerous other platforms worthy of consideration for other reasons. Stay tuned here (and follow me on Twitter); I will have more to write about that.

Guard Your Innovation Budget

If your organization is in the process of adopting AngularJS for the first time, consider not simultaneously adopting a new server-side technology. As humans we each have a certain ability to absorb change, and doubly so for organizations made up of many humans working together. You will probably get better results if you switch to a new client-side development toolkit in a different year than you switch to a new server-side development toolkit. I think of this as having an “innovation budget” which can either be concentrated on one layer at a time, or diluted across more than one layer.

Help Wanted

If anyone knows of convenient source code examples already online of doing these things well versus poorly, please contact me – I would love to add links. Given sufficient time I may create the examples, but with the pace of change in our industry these tools could be obsolete before that happens!

 

Data/API Servers/Services for Single Page Web Applications

A Client needs a Server, or at least a Service

Over the last few years the team at Oasis Digital has created various complex “single page” web applications, using AngularJS, KnockoutJS, and other tools. These applications’ assets and code can be statically hosted (cheaply on a CDN if needed), but of course each application needs a backend data service (possibly composed of smaller or even “micro” services internally) to provide data and carry the results of user operations to permanent storage.

Context and Background

Our work context is typically data- and rule-centric line-of-business applications, hosted in a company data center or on virtual/cloud or dedicated hardware, or occasionally a (more cloudy) PaaS like Heroku; so advice here is for that context. “Backend as a Service” solutions are appealing, but I don’t have any experience with those.

The systems we build most often store data in a traditional RDBMS like PostgreSQL or MS SQL Server, but data service needs are similar with a NoSQL or other non-RDBMS data store. (Among the many topics I should write about: making effective, justified use of polyglot persistence with CQRS and related ideas.)

We have also worked extensively with multi-tier desktop applications, which have essentially the same data service needs, albeit with different data serialization formats. As a result, we have worked on and thought about data services extensively.

Building a Data API Service

For convenient, rapid, efficient development of robust data/API services, your tool set should have as many as possible of the following:

  1. A server-side programming language / platform, with its runtime environment, native or VM.
  2. A way of routing requests (hopefully with a RESTful pattern matching approach).
  3. Automatic unmarshaling of incoming data into data structures native to the programming language. If you find yourself writing code that takes apart JSON “by hand”, run away.
  4. Similarly, automatic marshaling of ordinary data structures into JSON. If you see code which uses string concatenation to build JSON, it should either be meeting some specific need for extra marshaling performance of a huge data structure, or it shouldn’t be there at all. (A sketch of items 2, 3, 4, and 8 together follows this list.)
  5. Analogous support for other data formats. Successful systems live a long time. If you create one, someone will want to talk to it in a situation where JSON isn’t a good fit. A few years from now, you might think of JSON the way we think of XML now. Therefore, stay away from tools which are too deeply married to JSON or any other single data format.
  6. If you’re using a relational database, your data server toolkit should make it quite easy to talk to that database. Depending on the complexity of the relationship between your data services and the underlying data store, either an object relational mapper (ORM) or perhaps a table/query mapper is suitable. If you find yourself working with the low-level database API to pluck fields out, you are looking at a long investment with little payoff.
  7. Good support for a wide variety of database types (relational and otherwise). This reduces the risks from future database support requirements.
  8. A reasonable error handling system. Things will go wrong. When they do, an appropriate response should flow back to the client code, while a fully detailed explanation should land in a log or somewhere else suitable – ideally without re-inventing this on each project or for every API entry point.
  9. Depending on application needs, some way of maintaining a persistent connection (SSE, websocket, or fallback) to stream back changing information.
  10. A declarative way to specify security roles needed for subsets of your API (RESTful or otherwise).
  11. Monitoring / metrics.
  12. Scalability.
  13. Efficiency, so you are less likely to need to scale, and so that if you must scale, the cost isn’t awful.
  14. Rapid development supported by good tooling. Edit-compile-run cycles of a few seconds.
  15. A pony.
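As one concrete, if minimal, sketch of items 2, 3, 4, and 8 together, using NodeJS with Express and body-parser (any good toolkit should make this roughly as short):

    var express = require('express');
    var bodyParser = require('body-parser');
    var app = express();

    // Item 3: incoming JSON is unmarshaled into a native object automatically.
    app.use(bodyParser.json());

    // Item 2: RESTful, pattern-matched routing.
    app.post('/api/subscribers', function (req, res) {
      var subscriber = req.body;         // already a plain JavaScript object
      // ... persist the subscriber, then:
      res.status(201).json(subscriber);  // item 4: marshaled back to JSON
    });

    // Item 8: one error handler; detail to the log, a clean response to the client.
    app.use(function (err, req, res, next) {
      console.error(err.stack);
      res.status(500).json({ error: 'internal error' });
    });

    app.listen(3000);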

Is That All?

This is quite a checklist; but a toolset lacking these things means that a successful, growing project will probably need to reinvent many of them – shop carefully. In later posts, I’ll write more about particular technology stacks.

 

Taking CSS Seriously

At Oasis Digital, we have spent much of our effort over the last several years creating complex “single page” web applications. There is much to say about that, but today I’m writing about one specific sliver: styling the application pages with CSS. To do good work at scale when using CSS to style an application (versus a one-off “webpage”), the only path to success is to take CSS seriously as a language, and study how to use it in an idiomatic, semantic, maintainable way.

Unfortunately, thinking about CSS in this way is considered far into “advanced” CSS territory, and these topics receive almost no attention in the countless online and offline resources for learning CSS.

What does it mean to take CSS seriously? Briefly, it means learning the technology and patterns of use in depth, by studying what others have done and thinking about why and how to solve problems.

Idiomatic CSS

CSS written by experts tends to follow many idioms. These are patterns that have been found useful again and again: a consistent order of selectors, consistent use of white space and other formatting, and so on. There is a popularly cited online resource explaining some of the most common idioms, and A List Apart (which you should probably read extensively) has a busy topic area around CSS.

Robust CSS

Do you always use the same browser, with your window the same size, with the same settings? Then your CSS is probably not robust at all. It is quite easy to accidentally style in such a way that the slightest disruption in the layout causes unexpected, unpleasant results.

The way to robustness is exposure to ongoing change. Use multiple browsers, and switch often. Adjust your font size every day up or down, for a good reason or no reason. Change the width and height of your browser window, don’t just always leave it maximized at whatever very common screen size you happen to use. Do you work on a PC? Try a Mac sometimes. You work on a Mac? Your CSS-styled pages probably change in some minor but irritating way if you haven’t looked on a PC.

Good CSS is robust. It is specified sufficiently but not over-specified. A particular anti-pattern to avoid: lay out some elements; observe their width on your particular browser and settings; set the width of an element that is supposed to contain them to a hard-coded number based on that single observed width; then watch what happens when some minor difference in font or other details makes something a little bit wider than you expected.

Appropriate CSS

A hammer is a great tool, but not every problem is a nail. Grid systems are wonderful, but not for every element on every page. To take CSS seriously, we must learn when and where to use grid systems, ad hoc floats, that shiny new flexbox, and even an old-fashioned table.

Layered CSS

Many projects today start with a CSS framework like Bootstrap or Foundation. Working in this context requires a keen awareness of how your CSS interacts with that provided by the framework. It is common to see CSS which randomly and arbitrarily overrides rules set by such a framework, with no regard for what sense those framework-provided rules made. The typical result: layer upon layer of overrides, trying to fix each resulting problem by adding still more overrides.

The key to winning this battle of layers is to stop fighting: use the delete key extensively, trimming away excess CSS to fix problems rather than adding more. Adjust the variables provided by your framework instead of overriding its generated classes ad hoc. Understand exactly which layers are needed and why.

Semantic CSS

“Semantic” means “pertaining to the meanings of words”. But the problem many of us have around semantics isn’t so much of wrong meaning, but of applying the wrong names to the right meaning.

It seems everyone is taught, on their CSS journey, about the evil of the in-line style attribute. As a result, we see in the wild many CSS classes like this:

.float-right { float: right; }

Thus allowing the CSS user to type class="float-right" instead of style="float: right;".

This of course misses the point of the admonition against in-line style completely. It is not the literal use of the style attribute that is the problem, it is that the idea behind CSS is to mark up content with classes which say something about what the content is, then use CSS separately to apply a visual layout and so on.

If you can look at the name of a CSS class and know exactly the style settings it contains, it is probably not a very good name. But if you can look at a piece of content, and tell what CSS class you should apply based on what kind of content it is, you probably have a very good name.

Did I mention, don’t actually write CSS itself?

CSS is lacking critical facilities for abstraction and removal of duplication. There are persuasive arguments floating around that CSS is too broken to ever be fixed; and I would not argue against that. But given that we are mostly stuck with CSS for many types of applications, it is necessary to use a layer on top of CSS. The usual contenders are Sass and LESS. This means that all of these goals mostly apply to the Sass or LESS you write, rather than to the CSS generated from Sass or LESS.

People who have studied both usually conclude that Sass is a better choice than LESS, and indeed the former is what we most often use.

(A helpful tip: if you are using a CSS framework like Bootstrap or Foundation, use the version supplied in whichever preprocessor language you have chosen. For example, if you use Sass, use the Bootstrap Sass version – don’t use the normal Bootstrap LESS or the bare Bootstrap CSS files!)

Maintainable CSS

The net result of all of the above is maintainable CSS. CSS which you can keep adding to, in an incremental way, as the application you are styling continues to grow and improve. CSS which you or someone else can understand a year later. CSS which you can be proud of when someone looks under the hood.

Balanced CSS

CSS is not ideal; it has gradually accumulated features since the emergence of the web and therefore has legacy concerns, among other problems. The CSS to solve any particular problem might not be able to meet all of these design qualities at the same time. Therefore, good quality CSS strikes a balance: semantic enough, robust enough, idiomatic enough, and so on.

Rapidly Written CSS

Some of the ideas here sound like they could take a long time to apply, but good CSS is faster to complete: bad CSS burns up far more hours than good in the quest to create a finished system.

Of course, mastering tools can take a long time. The payoff is that a master usually creates code (CSS or otherwise) with high “first time quality”.

</rant>

At Oasis Digital, we are not perfect. We do not yet do all of these things perfectly all the time; there is sometimes a tradeoff between delivery speed and polish. But we do take CSS seriously, and we work hard to move in the right direction along all these axes.

 

Bits are Free, People are Valuable

A few days ago, I caught myself thinking about whether to save some images and video; whether the likely future value of those megabytes would be greater or less than the cost of storage. This is a sort of thought that was important and valuable… a couple of decades ago.

Bits are Free

Today, and at least for the last decade, bits are so absurdly cheap that they can be considered free, compared to the time and energy of people. Storing gigabytes or terabytes because they might come in handy is not just for government agencies; it makes sense for all of us in our daily work.

Waste bits to save time.

Waste bits to help people.

People matter, bits are free.

Bits are Free at Work

Here are some ways I waste bits at work:

  • We often gather a few people working on a project to meet or code together; it is very easy to start a video or screen recording of the work. That recording can then be shared with anyone on the project who wasn’t present.
  • We record screenshots or videos of a feature in progress, and send them to a customer. Yes, we could wait and present to them “live” using WebEx or whatever. But it is cheaper to waste the bits and conserve human coordination effort.
  • If I have something complex to explain to someone, I can do it in person, I can do it on the phone, or I can write lots of text. But if I already know them well enough to partly anticipate their questions, I will start a video recording and explain something using voice and whiteboard. The bits are cheaper than the coordination to work “live”. The bits are cheaper than asking them to figure it out themselves.
  • Driving down the road, it is unsafe to text, or read, or write, or (I am looking at you, morning commuters…) apply makeup. But while the numbers are unclear, we have a broad assumption that merely talking with someone while driving down the road is OK. I sometimes make use of traffic time, and burn some bits, by recording audio about some problem to be solved or other matter. The bits are free, who cares if this uses 1000x as much data as sending an email?

Wasting bits can grow somewhat extreme. In the first example above, I described a screen video of developers working together, recorded for the benefit of other developers. You might imagine this as a high-ceremony process, done on important occasions for important code. But bits are free, so we sometimes record such video even if no developer will ever pay close attention to it. The value is there even if just one developer lets it play in the background while working on something else; much as when working in the same room as other people, it is common to pick up some important bit in the background. If one developer learns one small thing that makes their work better, that is more valuable than 400 MB of video storage space.

Bits are Free at Home

Here are some ways I waste bits at home:

  • When I take photographs, I take a lot of photographs. One might come in handy. Who cares if 95% of them are bad and 90% of them are useless?
  • Why do cameras have settings other than maximum resolution? Bits are free.
  • Nearly 100% of paperwork I get in the mail, other than marketing, I scan, OCR, and keep in a searchable archive. The disk space for this costs almost nothing. The time saved deciding whether to keep each item, and how to file each item, is irreplaceable. I probably only ever look at 10% of what is scanned, but who cares?
  • If my kids are doing something even vaguely interesting, I try hard to remember to take photos and record video. Looking back at when they were very young, snippets of video we have (from before every phone had a decent video camera) are priceless. I can’t reach back and record more of those, but I can easily record things now that might be fun in the future. Who cares if 95% of that video no-one ever looks at? If I ever need to go through it, I can do it then. The storage space in the meantime doesn’t matter.
  • If I need to look at a model number, serial number, etc. of anything around the house, I snap a photo of it. Then I can look back at the photo anytime from anywhere. Yes, it is absurd to store 3,000,000 bytes of JPG rather than 20 bytes of serial number. But they both round to zero.

How Free Can Bits Get?

I expect this tradeoff will shift even more exponentially in the future. In a couple of decades, perhaps:

  • We will record 5 petabyte ultra beyond-HD holographic recordings of insignificant activities just in case someone wants to watch them.
  • We will run video and audio recording of our lives 24 x 7, to be indexed just in case anyone ever wants to look back at it.
  • Future cameras won’t even come with an on-off switch, and will instead record continuously for the lifetime of the camera.

 

Google’s Go language

Google’s Go language (“golang”) was first released in early form in 2009, and reached 1.0 in 2012. Go has matured quite quickly compared to many other new languages. We find several aspects of Go appealing:

  • Go is a pragmatic language, with features chosen to ease and speed development of substantial projects.
  • Go uniquely combines a rapid cycle “scripting” language feel with a robust static compilation process.
  • Go’s compiler generates statically linked binaries; this is very “old-school” in 2015, but static compilation trades off disk space (which is cheap) for simplicity and reliability in production (which is valuable).
  • It is suitable for both small and large teams, although our work with it so far has been small.
  • Go has good support for concurrency, well-suited to software that scales up to run on many-core machines.
  • Go has proven relatively easy to learn, for developers coming from a wide variety of backgrounds.
  • Go has a good standard library.
  • Go can target multiple platforms, yet does not use a separately installed runtime.

Go Development at Oasis Digital

Oasis Digital has made use of Go for various small systems, including:

  • Experimental programs
  • Development by interns in our intern program
  • Utility software, test-stub software
  • Minor customer work

We recommend considering Go for certain types of projects:

  • Small, standalone software for which static compilation will ease deployment and support.
  • Data services behind front-end web systems, suitable for local or cloud deployment.
  • Applications where the Go concurrency model (“CSP”) is especially appropriate.

Code Reviews – Use a gate, not a drive-by

Does your team / project use code reviews? If not, I suggest starting today. There are countless resources online about how and why to do code reviews; there are numerous code review tools, open source and commercial.

Who should review code? A great question, perhaps for other blog posts.

When should code be reviewed? Hopefully also a great question, and the topic of this blog post.

Context

When making a recommendation, I’m trying to get in the habit of always mentioning the context, the situation in my work from which my recommendation arises. This is in response to seeing frustration when people try to follow advice, not realizing that their situation is completely unlike that of the writer.

Here at Oasis Digital our context is mostly complex line-of-business applications, which are expected to live for a long time and are important to the business operations of the companies that use them. We are almost always replacing an existing system (which a company relies on) or expanding an existing system (which a company relies on).

Another critical part of our context: we have flexible and fast source control, and a group of developers who have learned how to use it competently. We have decoupled source control from build automation, so that we can perform builds of branches in progress without merging them to a mainline. We can achieve a sufficient degree of continuous integration without actually putting each change directly into the same code line.

Review Code Early

How long after code has been written should it be reviewed? Hours or days, not weeks. For this reason, we commit and push code frequently (to a disposable branch of course) where others can see it for early review. It can be a challenge to get past the ego-driven desire to make code perfect before exposing it to the scrutiny of others, but it is worthwhile. By exposing code to review and comment early in its life, a developer can get useful feedback early in a unit of work.

Review Code Often

Another anti-pattern of code review is to look at a given proposed change only once or twice. That is sensible for a very small change, but for a complex change to a complex system it is best to perform code review repeatedly as work proceeds. For example, consider a feature that takes four weeks to implement. Our typical approach, which I recommend to others (if their context is reasonably similar to ours, see above) is something like this:

  • Review the code a few days in.
  • Review the code progress once a week or so.
  • Review the code when it is nearly ready to be called “done” and to go in the mainline.
  • If there are fixes needed, review those promptly so as not to cause delay.

Review Code Now

Code waiting for review is a drag on the speed of development; the right time to review code is now. A quick review now is probably more valuable than a deep review deferred.

Review Code as a Gate

In some organizations, code review happens asynchronously and sporadically, after code is already part of a project. This risks software quality collapse. Once code is in the mainline of a product, it is arguably too late to review. If a reviewer finds a problem with code already in a system, there will be pressure to sweep that problem under the rug and move on, or to convince yourself that it is not really a problem, or that quality doesn’t really matter, just this once.

The right answer is simple: use code review as a gate through which code must pass before it can become part of the mainline of the project. Use your source control system, and stack up proposed code changes that are nearly ready to go into the mainline. Then periodically perform your review process (whether it is in person, remote, synchronous, asynchronous, etc.). Once the code passes a final review process, then it goes in the mainline. If there are serious problems, the code sits in a branch, a developer improves it, and it tries again next time.

We have had excellent results with this approach, maintaining software quality for years on end even through ongoing development team turnover.

 

FlashFiler to RDBMS Data Converter

Our work at Oasis Digital often includes migrating legacy code or data into a new system. We often find off-the-shelf tools, or create ad hoc single-use tools to assist that process, but occasionally something reusable emerges.

We recently needed to convert a lot of data from a legacy FlashFiler database (a Turbo Pascal, later Delphi, database from the 1990s) to something more modern. We took the opportunity to build a tool for this purpose outside of any specific customer project, so we are able to release it for anyone to use.


We have released this data converter as open source software. You can download it, or learn more, on its page on GitHub.

 

3D Configurator Intern Project

One of our specialties at Oasis Digital is rules-based configurator systems of various types. For example, we have worked on complex model number configuration systems, quotation systems with complex sales commission rules, graphical configuration widgets, and other similar tools.

For our 2013 intern program, one group (led by Zach Kimberg) worked on a 3-D rules-based configuration system. This project lasted about half of the summer, and got far enough along to produce a visually interesting demo, shown below. The screenshot doesn’t really do it justice – it is dynamic and continuously moving.

(screenshot: the swing set configurator demo)

This works entirely as a web application, with no use of Flash or any other plug-in. It relies on WebGL, built into most modern web browsers. It is coded entirely in JavaScript, with help from AngularJS for the user interaction and Three.js for the 3-D model management. The 3-D model is hacked up and simplified from an example on a website we have since forgotten.
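The heart of such a scene is a small render loop. Here is a minimal Three.js sketch (assuming an already-loaded model object and container width/height; model loading, lighting, and the configuration rules are omitted):

    var scene = new THREE.Scene();
    scene.add(model);  // the (assumed already loaded) 3-D model

    var camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);
    camera.position.z = 10;

    var renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(width, height);
    document.body.appendChild(renderer.domElement);

    function animate() {
      requestAnimationFrame(animate);  // render continuously
      model.rotation.y += 0.01;        // the continuous motion mentioned above
      renderer.render(scene, camera);
    }
    animate();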


We see this as a proof of concept showing that it is possible today to build complex rules-based configuration systems, with a rich visual display, that run easily in a browser. It is no longer necessary to build difficult-to-deploy desktop software for this type of need.

Update: We recently added this short video demo of the configurator. Unfortunately the video doesn’t show the rules in action – there are rules to block certain combinations of settings in the configuration. The video shows a slightly jerky rotation of the swing-set – in the live software, it is completely smooth. Enjoy: