This article was originally posted on the State of the Edge blog.
Edge computing is an inevitable evolution in data processing, transmission and storage. To make the best use of this infrastructure, developing the right practices and processes around the many new capabilities it enables has become a key concern. At the core of these processes lie the challenges of developing, clarifying and agreeing on user requirements.
Can’t We Just Build It?
Much to the chagrin of eager developers and time-pressured project managers everywhere, taking the time to properly develop and clarify a comprehensive list of user requirements has been shown to result in higher-quality solutions that are much more likely to satisfy the user.
The reason for this is simple: it’s harder than it seems to properly communicate an idea, and nowhere is this clearer than when that idea is a complex, multi-faceted system. We have all seen projects that seemed to go around in circles, or that came out the other side resembling something very different from their original intent. Often, the root cause is a failure to clarify requirements.
However, there is often pressure in the initial phases of a project to ‘start doing something’ before these requirements are fully formed. At times this can shorten the overall development timeline; but if the effort is directed at areas heavily influenced by the user requirements, it can create a ‘code wins’ situation, where the effort already invested in that code outweighs the user requirement, resulting in a solution that’s a poor fit.
Real Users and Real Requirements
The complexity and number of user requirements for a project can vary greatly depending on many factors. In some cases the requirements number in the thousands; in others they may struggle to fill a single sheet of paper. Like every stage in the development process, requirement definition has only finite time and resources, and it is often squeezed in favour of writing code. Yet even simple requirements help immensely.
Ideally, requirements are gathered directly from the end users of a solution, using questions and narrative structures that draw out the desired level of feedback. But there’s a problem: people are often poor at determining and communicating what they want from a product, even one that supports workflows they perform several times a day. It’s very common for feedback to focus on elements of the UI rather than on the flow or end functionality of a product.
This creates a tension between the actual stated user requirements and the underlying, unstated wants and needs that drive how users think about and interact with the product. Solving this conundrum and crystallising the result in clear requirements is often what turns an okay piece of software into a great one, and this is no different at the edge.
Edge-specific Requirements
Software projects targeting the edge, whether edge-native or edge-enhanced applications, should follow the same structured requirements-definition process as any other solution. However, because of the unique capabilities of edge computing, there are a few considerations that tend to uniquely influence edge software projects and their requirements:
1. Latency
As many applications and services at the edge are latency-critical, the total end-to-end latency, as well as the latency of individual processes or tasks, is a key consideration. ‘Lower latency’ is in most cases not specific enough to be a solid requirement. Have specific latency bounds been identified for the application? Do different tasks within the application have different latency requirements? What happens to the application when latency becomes unacceptable?
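One lightweight way to make such bounds actionable is to record them as data that tests and monitoring can check against. The Python sketch below illustrates the idea; the task names and millisecond budgets are invented assumptions, not recommendations.

    from dataclasses import dataclass

    @dataclass
    class LatencyBudget:
        p99_ms: float         # 99th-percentile target for the task
        hard_limit_ms: float  # beyond this, degrade or reject the request

    # Hypothetical per-task budgets; real values come from your requirements.
    BUDGETS = {
        "ingest_frame":  LatencyBudget(p99_ms=10.0, hard_limit_ms=25.0),
        "run_inference": LatencyBudget(p99_ms=30.0, hard_limit_ms=60.0),
        "notify_client": LatencyBudget(p99_ms=5.0, hard_limit_ms=15.0),
    }

    def within_budget(task: str, observed_ms: float) -> bool:
        """True if a single observation stays under the task's hard limit."""
        return observed_ms <= BUDGETS[task].hard_limit_ms

Writing budgets down like this also forces the team to decide what should happen when a budget is exceeded, which is itself a requirement worth capturing.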
Additionally, latency is often conflated with jitter, the variation in per-packet or per-operation latency that the user experiences. In many cases jitter can be minimised at the expense of increased end-to-end latency; that trade-off is a design choice the development team can only make well with a clear set of requirements, and getting it right leads to a far better user experience.
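To make the distinction concrete, the snippet below (with made-up sample values) computes a mean latency alongside one simple jitter figure, here the mean absolute difference between consecutive operations:

    import statistics

    # Made-up per-operation latencies in milliseconds.
    samples_ms = [12.1, 11.8, 12.4, 19.7, 12.0, 12.3]

    mean_latency = statistics.mean(samples_ms)
    # One simple jitter measure: average change between consecutive operations.
    jitter = statistics.mean(
        abs(curr - prev) for prev, curr in zip(samples_ms, samples_ms[1:])
    )

    print(f"mean latency = {mean_latency:.1f} ms, jitter = {jitter:.1f} ms")

A system could smooth out the 19.7 ms outlier with buffering, lowering jitter while raising every operation’s latency; whether that trade is acceptable is exactly the kind of question the requirements should answer.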
2. Instancing
With distributed edge data center infrastructure comes greater choice in where the parts of an application run. Do your user requirements, or your own operational limits, impose a constraint on the number of application instances you can operate concurrently? Are there location-specific functions that need to run in particular places and then interface with other instances?
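Such constraints are easier to honour when they are written down as data rather than carried as tribal knowledge. Below is a minimal sketch of a hypothetical instancing policy that a deployment step could validate; the field names, sites and limits are illustrative assumptions, not tied to any particular orchestrator.

    from dataclasses import dataclass, field

    @dataclass
    class InstancingPolicy:
        max_concurrent_instances: int  # operational or licensing cap
        pinned_functions: dict = field(default_factory=dict)  # function -> required site

    policy = InstancingPolicy(
        max_concurrent_instances=12,
        pinned_functions={
            "sensor_aggregation": "edge-site-berlin",  # must sit near the sensors
            "billing_rollup": "core-dc-frankfurt",     # interfaces with central systems
        },
    )

    def violations(plan: dict, policy: InstancingPolicy) -> list:
        """Return human-readable breaches of the instancing policy."""
        problems = []
        if sum(plan["instances_per_site"].values()) > policy.max_concurrent_instances:
            problems.append("too many concurrent instances")
        for fn, site in policy.pinned_functions.items():
            if plan["function_sites"].get(fn) != site:
                problems.append(f"{fn} must run at {site}")
        return problems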
3. Data sovereignty
Edge data center infrastructure also allows greater flexibility in where data is processed and stored. An entire application may operate without ever transmitting user data outside of the city, which allows more granular levels of data sovereignty to be supported. Are you developing for an area such as healthcare or video surveillance that may be subject to increasingly tight data sovereignty regulations, where this level of granularity can make or break compliance?
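One useful discipline is to treat data residency as a property the application checks rather than assumes. The fragment below is a minimal sketch of such a guard; the site names and jurisdiction codes are invented for illustration.

    # Illustrative site metadata; in practice this would come from inventory.
    SITE_JURISDICTION = {
        "edge-site-munich": "DE",
        "edge-site-rotterdam": "NL",
        "core-dc-virginia": "US",
    }

    def may_transfer(record_jurisdiction: str, destination_site: str) -> bool:
        """Permit a transfer only if the destination stays in-jurisdiction."""
        return SITE_JURISDICTION.get(destination_site) == record_jurisdiction

    # For example, a record bound to German law may not leave German sites:
    assert may_transfer("DE", "edge-site-munich")
    assert not may_transfer("DE", "core-dc-virginia")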
Wrapping Up
Applications and services operating at the edge benefit just as much as any other software project from robust, clear requirements that drive development from inception to completion; the edge simply adds new development and operational considerations for those requirements to address.
Users and their real requirements at the edge will be a key theme covered in multiple sessions at Edge Computing World December 9th – 12th at the Computer History Museum, Silicon Valley. We hope that you’ll be able to join us for the event and be a part of all of the discussions there.
See the event website for more information: https://www.edgecomputingworld.com/