A CMO and a CTO Walk into a Bar

Playing off the obvious (and sometimes absurd) stereotypes of business and technology leaders inside product and services organizations, and where they often direct their focus and priorities, our fictional CMO and CTO walk into a bar and order the same cocktail. While the CTO might be taking account of the ingredient ratios, mixing sequence, and transfer method from shaker to glass as the bartender executes the order, the CMO likely awaits the final appeal of color, aroma, taste, and, most importantly, effect after delivery (especially if there is resource negotiation to be done with said CTO!). Regardless, both appreciate having a quality result in hand and getting down to business.

Especially in the practice of software innovation and realization, translating and aligning the different language, concepts, and perspectives of architecture, engineering, product development, and marketing toward a common user experience goal requires a person or team with hybrid or blended skills across design and technology. It also requires a framework on which both objective and subjective requirements can be generated, analyzed, executed, and measured.

Quite a few years ago I ran across a catchy acronym-driven framework in Microsoft technical literature describing a checklist of non-functional goals tied to operational requirements within a solution architecture. "PASSMADE" represented the more quantitative aspects of performance, availability, scalability, security, manageability, accessibility, deployability, and extensibility that were commonly addressed in the background through best practices and software design patterns but never elevated to first-class citizen status alongside qualitative attributes of beauty, form, function, usability, emotional connection, and marketability. The original premise of the framework was that if a designed solution architecture successfully accounted for each of these areas, the resulting product or service would carry a much lower overall risk of operational failure. That sounded sexy to me at the time.

After several years of immersion in frog design (the global innovation firm that is part of Aricent) and its multi-disciplinary design process (aka "creative chaos"), I began to adapt these software architecture principles for broader use in the context of using emerging technologies to drive innovation at both the UI/presentation layer and in the overall UX. For each item in the checklist, I developed a working set of definitions and patterns to help translate considerations back and forth across the boundary between visual and interaction design on one side and software architecture and engineering on the other. This cheat sheet became especially helpful when leading collaborative workshops between designers and technologists (whether frog experts or a client's engineering team) early in the creative process, where the deconstruction and inventory of a proposed design solution needed parallel technical exploration and validation. The end result is a framework I often use not only to avoid architectural flaws and reduce operational risks, but also to generate higher potential for a quality user experience and success in the marketplace.

Performance

Software architects and engineers usually think of performance in quantitative terms such as transactions per second (data services), page load times (web sites), frame rates (rich media), or number of triangles (3D graphics). They can instrument the software, measure the results, and retune the code. In contrast, visual and interaction designers typically look at performance through a more qualitative lens such as responsiveness to physical actions, smoothness of transitions, and crispness in rendering. They can see and interact with the software to instinctively know where it feels right or if something is off from what they had envisioned.

How the collaboration unfolds between these two disciplines determines whether the process breaks down and a less-than-ideal user experience is delivered, or the magic happens. In one abstract example, a designer may ask for something to "move faster" or "respond quicker," and a software engineer not well versed in creative thinking may simply lower timing values in an attempt to literally speed things up. A performance threshold may be reached where the available hardware or software simply cannot "go any faster" to meet the design intent, and frames get dropped or the UI freezes. A "creative technologist" able to see both sides would likely find and offer up a solution that trades raw speed for slightly lower resolution to achieve the desired effect while maintaining an acceptable level of fidelity. Most often, neither the designer nor the end user can detect the subtle change, and both the experience and the underlying performance of the software are preserved.

From the business or marketing perspective in the example above, if not for a creative technology solution, an additional negative outcome might have been a call for more advanced and costly hardware in the product. The attempt to satisfy the performance needs of the design could then result in greater cost to the business in the form of lower profit margins to maintain a desired price point in the market.

Here I will describe some techniques and patterns where both perceived and actual performance can be maintained or increased in the face of constrained hardware, limited software platform capabilities, or variable network speed and stability.

    • Progressive disclosure - an interaction pattern for loading and rendering UI components and data on an as-needed basis as a user progresses through a task flow. While typically thought of in the context of interaction design for reducing complexity in the user interface, this pattern also has benefits for the software loading and rendering times of the overall UI. Expanders, accordions, and step wizards are examples of UI components designed with the progressive disclosure pattern in mind (see the first sketch after this list). In other words, there is no need to load the kitchen sink all at once when the user simply needs a spoon to get started.
    • Variable data resolution - a performance optimization pattern for adjusting the granularity of data visualizations as a user zooms in and out of detail. Classic examples include timeline graphs where data loaded and visualized at a yearly level is averaged to show a general trend, versus a daily level where specific data points may be needed. Performance is greatly improved at the year level if the software only needs to load and render averaged points (monthly, for example) and draw the connecting lines in between (a downsampling sketch follows this list). It is usually OK to fudge the data specifics into relative values when visualizing at macro levels.
    • Optimizing frame rates - depending on the type and complexity of the object being animated, there are times when the difference in perception between rendering at, for example, 15 and 30 frames per second is essentially undetectable to the human eye, but the performance impact on the underlying rendering engine between the two is striking. Go with the lowest frame rate possible that maintains the desired visual effect (a frame-capping sketch appears after this list).
    • Simulated physics - in certain situations where the appearance of natural movement or real-world interaction is needed between objects, some software engineers reach into the toolbox for a full-blown physics engine or library when all that might be needed to achieve the design intent is a few easing functions or motion tweens (see the easing sketch after this list). "Real" physics modeling engines often require all visual objects in a container to be spatially registered and tracked in real time, along with various attached physical attributes, which can create greater overhead in the layout and rendering pipelines of the software and hardware. There may be only a few elements that need to move naturally, in which case some hard-coded easing at the individual object level will do the trick.
    • 2.5D - (or pseudo-3D, or 3/4 view) refers to a technique where the illusion of three dimensions is rendered to the screen, but the software is not mathematically generating and tracking a true three-dimensional object in code. Parallel projection, virtual light sources, Z-indexing, parallax, and drop shadows are all examples of techniques in the creative technologist's toolbox that provide a sense of depth without the overhead of a 3D engine. However, there are times when the overhead of true 3D is justified because of complex interactions with other objects (real physics, see above) and because the hardware acceleration capabilities are powerful enough to support it. As with all of these techniques and patterns, it is a case-by-case subjective call that requires crossover design and technology skills and experience to blend art and science together.
    • Raster vs. vector graphics - simplifying for the sake of brevity and context, rasters are bitmap graphics with individual pixels drawn at single points in an image, while vectors are definitions describing how points, lines, curves, and shapes are to be drawn to produce an image. Rasters have a fixed resolution based on the number of pixels used, whereas vectors are scalable across resolutions. Rasters are usually created or edited in photo editing software (Adobe Photoshop, for example); vectors are typically created or edited in illustration software (Adobe Illustrator, for example). Visual designs for software usually call for one or the other (or sometimes both in combination), but there are times when performance considerations lead a creative technologist to substitute a vector graphic containing a very large number of data points with a generated, optimized raster image presenting the same visual output at a specific resolution (see the rasterization sketch after this list). Additionally, there are cases where layers of raster images with alpha transparency can be generated more efficiently, with only one vector-based layer active at a given time for more dynamic user interaction (for example, visual and data layers in mapping engines that the user can turn on, turn off, or interact with). On the other hand, using simple vectors to generate a large number of separate graphic objects is more efficient than producing many rasters to load from physical files. Pick the right graphic format for the job.
    • Data virtualization - a software engineering technique which loads and manages small chunks of large datasets in the background, allowing for quicker presentation to the user and lower memory allocation while offering the perception that the entire dataset is available on demand (a buffered-chunk sketch appears after this list). This approach is typically used in list boxes where scrolling actions trigger additional chunks of data to be loaded into or removed from memory based on visibility in the list. Where creative technologists apply their craft with this particular technique is in striking the right balance between the optimal scrolling speed of the list and the buffer size of data loaded on either side of the visible set. If the data virtualization process doesn't preload enough data into the buffers, the user may have a short wait before seeing results after a scroll action. If too much data is loaded, memory allocation suffers. Additionally, certain gestural scrolling actions like fast flicks will likely trigger jumps far beyond the buffers, so the data virtualization process needs the smarts to adjust accordingly. Technical kung fu is often required to get those interactions optimized against the underlying data.
    • Pagination - a simplified form of data virtualization where the forward and backward buffers are fixed and the user cycles through a series of pages to scroll through the available data. In scenarios where it is more difficult to integrate near-real-time data virtualization with the UI/presentation layer technology, the classic fallback is pagination, where the user still perceives working with the entire dataset but is presented with explicit choices of which chunk to access in each direction. Pagination was the norm in e-commerce and enterprise web applications before the advancement of AJAX, when each page was loaded in full on every data request, and it remains the gold standard for search engines like Google and Bing.
    • Lazy loading - another software engineering design pattern in the virtualization family, but often applied at an object or entity level and not necessarily tied to an underlying data source (although everything is about data in a general sense). It is similar to the progressive disclosure pattern at the UI/presentation layer in that objects are not initialized or fully loaded with property values until specifically needed in the workflow. A "ghost" or proxy pattern is very common, where the object skeleton is constructed to satisfy particular referential constraints in the business logic, but the internal loading of the object is deferred until required (a ghost-object sketch appears after this list). Creative technologists will often apply multiple patterns in tandem across the UI/presentation layer and the underlying logic and data layers to achieve optimal performance within the user experience; lazy loading at the business logic and data services layers works very well in conjunction with progressive disclosure at the UI/presentation layer.
    • Priority binding - yet another relation in the virtualization family, but with the more specific intent of supporting both intentional lazy loading scenarios and unintentional delays in data access (think "The Internet"). When wiring up UI/presentation layer components with associated business logic and data services, whether with linear procedural code or a standard model-view-controller style of pattern (MVC, MVVM, etc.), priority binding specifies one or more fallback values or alternate data sources based on availability and response times, starting from the top of the chain. There may be a case where the first data binding source is known to be slow (see "The Internet" or "The Cloud") and a temporary value is needed for display until the intended data comes in. In other uses it is a matter of "first in wins," where several compatible data sources are registered and the quickest response is used. This pattern is often combined with caching architectures where previously saved data is displayed until a live refresh can be completed (see the sketch after this list). Again, this results in positive performance gains in both greater responsiveness perceived by the user and optimized data access within the software.
    • UI virtualization - a companion pattern to data virtualization used primarily in UI/presentation layer list-style data components, where only the number of container elements needed to render the currently visible data items are created and allocated in memory. In the example of a very large dataset bound to a scrollable list box, there may only be, say, 20 containers visible to the user at any given time, even though there could be thousands or more data items to navigate through. The UI virtualization pattern calls for "recycling" the same 20 visible containers over and over in memory and simply swapping out the data bindings in real time as the user scrolls along, as opposed to constructing all the containers ahead of time (a large memory footprint) or creating them on the fly as needed (rendering lags and artifacts). A recycling sketch appears after this list. In applying this pattern, we are just doing our part as software architects to be "green".
    • Background processing, Caching, Prefetching, Streaming - additional data-oriented software engineering techniques utilized behind the scenes in support of the various virtualization patterns, helping optimize performance and other aspects of the user experience (a small caching-and-prefetch sketch appears below). The underlying thread (pun intended) among these patterns and practices is the separation of UI/presentation layer rendering and interaction from the potential complexities of logic and data services, in keeping with the asynchronous nature of the distributed "cloud" software environment now in the mainstream. They are also about taking advantage of the robust multi-tasking capabilities found in current industry-standard software platforms and frameworks.
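
To make the progressive disclosure bullet a bit more concrete, here is a minimal TypeScript sketch of an accordion-style section that defers fetching and rendering its content until the first time the user expands it. The DOM structure, the ".section-header" class, and the "/api/sections" endpoint are hypothetical placeholders, not part of any particular product.

```typescript
// Minimal progressive-disclosure accordion: content is fetched and rendered
// only the first time the user expands a section, not at initial page load.
class DisclosureSection {
  private loaded = false;

  constructor(
    private header: HTMLElement,
    private body: HTMLElement,
    private contentUrl: string // hypothetical endpoint for this section's data
  ) {
    this.body.hidden = true;
    this.header.addEventListener("click", () => this.toggle());
  }

  private async toggle(): Promise<void> {
    this.body.hidden = !this.body.hidden;
    if (!this.body.hidden && !this.loaded) {
      this.loaded = true; // load once, on first expansion only
      const response = await fetch(this.contentUrl);
      this.body.textContent = await response.text();
    }
  }
}

// Usage: wire up every section on the page; nothing is fetched until a click.
document.querySelectorAll<HTMLElement>(".section-header").forEach((header, i) => {
  const body = header.nextElementSibling as HTMLElement;
  new DisclosureSection(header, body, `/api/sections/${i}`);
});
```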
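
For the variable data resolution bullet, one simple way to "fudge the specifics into relative values" at macro zoom levels is to downsample a daily series into monthly averages before rendering. The sketch below is illustrative only; the Point shape and the idea of a chart consuming it are assumptions.

```typescript
interface Point { date: Date; value: number; }

// Collapse daily samples into one averaged point per month so a year-level
// view loads and renders far fewer points than the raw daily series.
function downsampleToMonthly(daily: Point[]): Point[] {
  const buckets = new Map<string, { sum: number; count: number; first: Date }>();
  for (const p of daily) {
    const key = `${p.date.getFullYear()}-${p.date.getMonth()}`;
    const b = buckets.get(key) ?? { sum: 0, count: 0, first: p.date };
    b.sum += p.value;
    b.count += 1;
    buckets.set(key, b);
  }
  return [...buckets.values()].map(b => ({ date: b.first, value: b.sum / b.count }));
}

// A hypothetical chart would then render ~12 averaged points per year instead
// of ~365 raw ones, switching back to the daily series only when zoomed in.
```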
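
The frame rate bullet can be sketched as a capped animation loop: rather than rendering at the display's native rate, frames are skipped until a target interval has elapsed. This assumes a browser environment with requestAnimationFrame; the draw callback is whatever the design calls for.

```typescript
// Drive an animation at a capped frame rate (for example 15 fps) instead of
// the display's native rate, skipping frames that arrive too early.
function animateAtCappedRate(targetFps: number, draw: (elapsedMs: number) => void): void {
  const frameInterval = 1000 / targetFps;
  let last = performance.now();

  function tick(now: number): void {
    if (now - last >= frameInterval) {
      draw(now - last); // render only when a full frame interval has passed
      last = now;
    }
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
}

// Usage: a subtle background effect rarely needs more than 15 fps.
animateAtCappedRate(15, elapsed => {
  // ...update and redraw the animated element here...
});
```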
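
For the simulated physics bullet, a single ease-out tween is often all the "natural movement" a design needs. This is a generic sketch, not tied to any particular animation library; the element and target position are assumptions.

```typescript
// A single ease-out tween: moves one element toward a target x position with
// a natural-feeling deceleration, no physics engine required.
function easeOutCubic(t: number): number {
  return 1 - Math.pow(1 - t, 3);
}

function tweenTo(el: HTMLElement, targetX: number, durationMs: number): void {
  const startX = el.offsetLeft;
  const start = performance.now();

  function step(now: number): void {
    const t = Math.min((now - start) / durationMs, 1); // normalized progress 0..1
    const x = startX + (targetX - startX) * easeOutCubic(t);
    el.style.transform = `translateX(${x - startX}px)`;
    if (t < 1) requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
}
```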
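
The raster vs. vector trade-off can be sketched with the HTML canvas: an expensive vector drawing is rasterized once into an offscreen canvas, and the cached bitmap is blitted thereafter. The 50,000-point polyline and the on-page canvas element are hypothetical stand-ins for a heavier real-world visualization.

```typescript
// Rasterize an expensive vector drawing once into an offscreen canvas, then
// blit that cached bitmap instead of re-stroking thousands of points each time.
function cacheAsRaster(width: number, height: number,
                       drawVector: (ctx: CanvasRenderingContext2D) => void): HTMLCanvasElement {
  const offscreen = document.createElement("canvas");
  offscreen.width = width;
  offscreen.height = height;
  drawVector(offscreen.getContext("2d")!); // pay the vector cost exactly once
  return offscreen;
}

// Usage: stroke a 50,000-point polyline once, then reuse the bitmap.
const cached = cacheAsRaster(800, 400, ctx => {
  ctx.beginPath();
  for (let i = 0; i < 50_000; i++) {
    ctx.lineTo(i * 0.016, 200 + Math.sin(i * 0.01) * 100);
  }
  ctx.stroke();
});

const screen = (document.querySelector("canvas") as HTMLCanvasElement).getContext("2d")!;
screen.drawImage(cached, 0, 0); // cheap blit on every redraw
```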
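
Here is a rough TypeScript sketch of the data virtualization bullet (and, with fixed buffers, the same shape covers pagination): only a buffered window of chunks around the visible range is kept in memory. The chunk size, buffer depth, and fetchChunk loader are all assumptions.

```typescript
// Keep only a buffered window of a very large dataset in memory, fetching
// chunks around the visible range as the user scrolls and evicting the rest.
class VirtualDataSource<T> {
  private cache = new Map<number, T[]>(); // chunkIndex -> items

  constructor(
    private chunkSize: number,
    private bufferChunks: number,
    private fetchChunk: (index: number) => Promise<T[]> // hypothetical loader
  ) {}

  // Called whenever the visible range changes (for example, on scroll).
  async onVisibleRange(firstVisible: number, lastVisible: number): Promise<void> {
    const firstChunk = Math.floor(firstVisible / this.chunkSize) - this.bufferChunks;
    const lastChunk = Math.floor(lastVisible / this.chunkSize) + this.bufferChunks;

    for (let i = Math.max(0, firstChunk); i <= lastChunk; i++) {
      if (!this.cache.has(i)) {
        this.cache.set(i, await this.fetchChunk(i)); // load missing chunks
      }
    }
    // Evict anything outside the buffered window to cap memory use.
    for (const key of this.cache.keys()) {
      if (key < firstChunk || key > lastChunk) this.cache.delete(key);
    }
  }

  getItem(index: number): T | undefined {
    return this.cache.get(Math.floor(index / this.chunkSize))?.[index % this.chunkSize];
  }
}
```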
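
The lazy loading bullet's "ghost" or proxy pattern might look like the following sketch: the object exists immediately so identifiers and references can be wired up, but its full details are fetched only on first use. The CustomerDetails shape and the "/api/customers" endpoint are hypothetical.

```typescript
// "Ghost" pattern: the object exists immediately so references and IDs can be
// wired up, but its full details are only fetched the first time they're used.
interface CustomerDetails { name: string; email: string; orders: number[]; }

class CustomerGhost {
  private details?: Promise<CustomerDetails>;

  constructor(public readonly id: string,
              private load: (id: string) => Promise<CustomerDetails>) {}

  // The deferred load happens here, once, on first access.
  getDetails(): Promise<CustomerDetails> {
    this.details ??= this.load(this.id);
    return this.details;
  }
}

// Usage: construct thousands of ghosts cheaply; only the ones the user opens
// ever trigger a fetch (the endpoint below is hypothetical).
const customer = new CustomerGhost("c-42",
  id => fetch(`/api/customers/${id}`).then(r => r.json()));
```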
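
For priority binding, here is a simplified two-source sketch rather than any platform's built-in binding API: a fast fallback (a cached value, say) fills the UI immediately, and the slower preferred source replaces it if and when it responds. All names and the "/api/price" endpoint are assumptions.

```typescript
// Priority binding, sketched with two sources: a fast fallback (cache or
// placeholder) is shown immediately, and the slower preferred source replaces
// it whenever it finally responds.
async function bindWithPriority<T>(
  preferred: () => Promise<T>,  // slow, authoritative source
  fallback: () => Promise<T>,   // fast source: cache, placeholder, etc.
  apply: (value: T, isFinal: boolean) => void
): Promise<void> {
  let finalApplied = false;

  // Kick off the preferred source first; apply it whenever it arrives.
  const preferredDone = preferred().then(value => {
    finalApplied = true;
    apply(value, true);
  }).catch(() => { /* leave the fallback value in place */ });

  // Meanwhile, fill the UI with whatever the fast source gives us.
  try {
    const quick = await fallback();
    if (!finalApplied) apply(quick, false);
  } catch { /* nothing cheap to show; wait for the preferred source */ }

  await preferredDone;
}

// Usage (names and endpoint are hypothetical): cached price now, live price later.
bindWithPriority<string>(
  () => fetch("/api/price").then(r => r.text()),
  () => Promise.resolve(localStorage.getItem("lastPrice") ?? "--"),
  value => { document.getElementById("price")!.textContent = value; }
);
```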
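
The UI virtualization bullet's "recycling" idea, sketched against a plain scrollable container: a fixed pool of row elements is created once and rebound as the user scrolls. Row height, the flat string items, and the styling assumptions are illustrative only.

```typescript
// UI virtualization: recycle a fixed pool of row elements, rebinding their
// content as the user scrolls, instead of creating one element per data item.
function virtualizeList(
  viewport: HTMLElement, // scrollable container with a fixed height
  rowHeight: number,
  items: string[]        // could be thousands of entries
): void {
  viewport.style.position = "relative";
  viewport.style.overflowY = "auto";

  const visibleRows = Math.ceil(viewport.clientHeight / rowHeight) + 1;
  const spacer = document.createElement("div");
  spacer.style.height = `${items.length * rowHeight}px`; // fakes the full scroll range
  viewport.appendChild(spacer);

  // Create the small, fixed pool of row elements once.
  const pool: HTMLElement[] = [];
  for (let i = 0; i < visibleRows; i++) {
    const row = document.createElement("div");
    row.style.position = "absolute";
    row.style.height = `${rowHeight}px`;
    viewport.appendChild(row);
    pool.push(row);
  }

  function rebind(): void {
    const first = Math.floor(viewport.scrollTop / rowHeight);
    pool.forEach((row, i) => {
      const index = first + i;
      row.style.top = `${index * rowHeight}px`; // reposition the recycled row
      row.textContent = items[index] ?? "";     // swap in the new data binding
    });
  }

  viewport.addEventListener("scroll", rebind);
  rebind();
}
```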
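
Finally, for the background processing, caching, and prefetching bullet, here is a small read-through cache that also warms the chunks the user is likely to ask for next on a background path. The page-oriented keys and the "/api/page" endpoint are hypothetical.

```typescript
// A small read-through cache with background prefetch: the requested chunk is
// served from cache when possible, and likely-next chunks are fetched ahead of
// time without blocking the current request.
class PrefetchingCache<T> {
  private cache = new Map<string, T>();

  constructor(private load: (key: string) => Promise<T>) {}

  async get(key: string, prefetchKeys: string[] = []): Promise<T> {
    if (!this.cache.has(key)) {
      this.cache.set(key, await this.load(key));
    }
    // Fire-and-forget prefetch of the chunks the user will probably ask for next.
    for (const next of prefetchKeys) {
      if (!this.cache.has(next)) {
        this.load(next).then(value => this.cache.set(next, value)).catch(() => {});
      }
    }
    return this.cache.get(key)!;
  }
}

// Usage (endpoint is hypothetical): fetch page 3, warm pages 4 and 5 in the background.
const pages = new PrefetchingCache<string>(k => fetch(`/api/page/${k}`).then(r => r.text()));
pages.get("3", ["4", "5"]).then(html => { /* render page 3 */ });
```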

Availability

The standard definitions and metrics for availability involve the concepts of fault tolerance and functional uptime, affectionately referred to as "five nines" (99.999%). Software architects and engineers often utilize redundancy and clustering patterns in distributed architectures to help reduce or eliminate single points of failure and ensure greater availability. There are of course scenarios where downtime simply cannot be avoided, most often due to issues beyond the control of the software itself, such as the network on which the software runs (online or web applications, for example). In many cases, creative techniques can be applied that help negotiate either planned or unexpected downtime inside the user experience, or offer alternate paths of functionality to temporarily steer around a functional dead end. In other situations, applications can intentionally be used in offline mode and automatically perform data synchronization once they reconnect to services. Actions performed by the user in offline mode can be accepted and queued up for processing by the software, which offers the perception of being online and available (a minimal sketch of this queuing approach follows below). Making disconnected or partially unavailable states feel connected results in a more rewarding user experience and functional workflow in the software.
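
As a rough illustration of that offline-queuing idea, here is a minimal browser-side TypeScript sketch: actions are accepted immediately, queued while disconnected, and drained against the service once connectivity returns. The QueuedAction shape and endpoints are assumptions; a production version would also persist the queue and handle conflicts during synchronization.

```typescript
// Offline-tolerant action queue: user actions are accepted immediately and
// queued locally; the queue is drained against the service once the browser
// reports it is back online.
interface QueuedAction { endpoint: string; payload: unknown; }

const pending: QueuedAction[] = [];

async function send(action: QueuedAction): Promise<void> {
  await fetch(action.endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(action.payload),
  });
}

function submit(action: QueuedAction): void {
  if (navigator.onLine) {
    send(action).catch(() => pending.push(action)); // network hiccup: queue it
  } else {
    pending.push(action); // offline: accept the action and defer the work
  }
}

// Drain the queue in order whenever connectivity returns.
window.addEventListener("online", async () => {
  while (pending.length > 0) {
    try {
      await send(pending[0]);
      pending.shift(); // only remove once the server has confirmed
    } catch {
      break; // still flaky: try again on the next "online" event
    }
  }
});
```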

Scalability

Classically defined as the ability to achieve increased system capacity (such as more simultaneous transactions or concurrent user sessions) by adding hardware and software resources, scalability can also be applied in a design-oriented way: the ability to deliver richer or larger sets of functionality within the user experience over time. Somewhat distinct from extensibility in this context, there are scenarios where a UI design and its supporting infrastructure need the affordance to work with larger and wider variations within their datasets as time goes along, without changing the fundamental aspects of the interaction model. A creative technologist will help a design team understand the challenges and opportunities in building a UI framework that can scale for the future.

Security

Routinely prioritized in the background behind design concerns, security is actually a very important consideration in the overall user experience. From a cross-discipline perspective, security is as much about the ease of use and unobtrusiveness of necessary authentication and authorization measures as it is about maintaining strong data and identity integrity while limiting exposure of attack surfaces within the software. Striking the right balance between the two concerns is a critical step in the collaborative design process. Additionally, there are scenarios where the software should intentionally portray a strong sense of security as part of the experience or the brand, such as when dealing with sensitive financial or medical data. Users should feel safe but never burdened. Security isn't black or white, a matter of whether you have it or you don't, but rather a question of how it is creatively applied and sensitively woven into the experience.

Manageability

The original PASSMADE framework I found a while back called this one maintainability; I updated it to manageability to provide a more comprehensive view of ease of operation, transparent monitoring, and optimized data provisioning, reporting, and troubleshooting workflows. It is one thing to maintain a system that has already achieved operational equilibrium (humming along), but quite another to efficiently triage and resolve issues when it starts to show cracks. The user experience angle comes into play when the software gives the perception of being a house of cards, whether to the user dealing with settings and preferences or to the support personnel handling administrative tasks. How the software is instrumented by the architect or engineer, and how that data is presented in the experience, can make the difference in a result that feels solid and stable (or at least easy to fix!).

Accessibility

Often linked (and rightly so) to UI/presentation layer best practices or regulated standards (Section 508 in the U.S., for example) for direct and efficient access to the software by users with disabilities, the broader topic is about universal access for as many users as possible regardless of any particular constraint or obstacle. The form factor of the device used with the software, and the language and cultural specifics of the user, carry equal weight with physical limitations in determining the accessibility of the experience. It is a critical task in early design and technology requirements gathering to determine the accessibility goals of the experience to be delivered. For example, although an English-only Web 2.0-style e-commerce site optimized for desktop browsers may not be accessible to a Spanish-reading mobile device user, there may not be a defined business requirement to support that particular user profile in the first place. It is always good to get these questions of globalization, target devices, and standard accessibility compliance on the table as early as possible in the creative process.

Deployability

Deployability is usually addressed in software architecture as a secondary, background concern related to provisioning or installation best practices, especially for IT staff working the front lines of managing hardware and software in the field. I have often expanded this topic to include user considerations such as the out-of-box experience, the update/upgrade process, and even the external distribution model of the software itself. For example, the quality, efficiency, and usability of a one-time-use install wizard, as the first impression a user gets of the software, is just as important as the hero screen or workflow in which the majority of time is spent.

Extensibility

Bring up this topic with a software architect or engineer and the conversation will likely lead into "gang of four" software design patterns and OOP (object-oriented programming) methodologies where the focus is on building software which is loosely coupled and flexible to handle changing requirements, new features, and unexpected usage models or extensions. Talk about it in the context of flexible and modular UI design systems with a visual or interaction designer and the discussion will center on framework aspects like component archetypes, reusable behaviors, and the overall design language of the software. Have a similar discussion with a product development manager or marketing professional and the focus will be on the ease and efficiency (minimal cost) at which additional perceived or real value can be bolted on to the experience through enhancements in future releases. In each case, the fundamental goal in delivering extensible software is to avoid delivering a black box or one-off with a limited shelf life.

Emerging software platforms and UI/presentation layer frameworks are beginning to deliver on the promise of clean separation between application tiers and within the designer-to-developer workflow, making the investment in greater extensibility more affordable across the software design and development process. On the other hand, making it too easy to rework the UI in reaction to perceived threats or changing business conditions, in order to rapidly push out new releases, can create a disruptive experience for the user in terms of lost productivity and retraining. Just because one CAN make the software fully malleable in anticipation of marketing or product development pressures doesn't mean one SHOULD in every case, especially from a development ROI perspective. For example, building a completely extensible styling framework for the entire software UI/presentation layer just to change out a logo and a basic color scheme may not be the wisest allocation of resources. Over-architecting, over-engineering, and serially obsessive refactoring in the name of extensibility are common practices found in many software death marches. A creative technologist or software architect experienced in the rapid innovation and realization process (call it agile, extreme, iterative, whatever) will instinctively know where to draw the line.

Summary

These classic software architecture and engineering principles and best practices represented by the PASSMADE framework have a natural translation path and a corresponding place in the collaborative software design process. The key to using this framework is having someone or some group with both design sensibilities and technology fundamentals providing the bridge across disciplines and stakeholders, while also championing the perspective that successful user experiences are delivered as much in the background systems and services as they are in the front-end UI/presentation layers of software architectures.

Robert Tuttle is Chief Architect at frog design. This article originally appeared in the Well-Formed blog on Design Mind.

 
