Election season is here (these days, does election season really ever end?), and we’ve already had our first controversy. A few weeks ago, Iowa took its traditional position as the first state to hold caucuses, with the winner aiming to leverage that momentum and earn their party’s nomination.
But this year was different.
The 2020 Iowa caucuses featured a flawed mobile application that was supposed to simplify reporting caucus results but instead threw the process into chaos, forcing the Iowa Democratic Party to lean on paper ballot backups. Precinct chairs were supposed to use the app to calculate and report results from each round of voting — presumably faster and more accurately than before — but many of them struggled to download the app.
Keyword: presumably. Unfortunately, as we now all know, the app was nothing short of a disaster. First, the Iowa caucuses’ 2020 app could only be downloaded through two app-testing sites, TestFlight and TestFairy. On this alone, red flags should have already been furiously waving. Forcing users to download the app through a different channel, a difficult and nonstandard process, likely meant the developers weren’t finished in time to get the app stores’ approval. And beyond the accessibility issue, the app remained glitchy for the select few who did manage to download it.
As a result, for a number of days, no candidate could claim victory and no one had any real sense of how they performed. With the New Hampshire primaries looming and televised debates already scheduled, this app threw a serious monkey wrench into an already heavily scrutinized election process. Let’s take a look at why the app failed and why this massive failure underscores the importance of software testing.
Was Inadequate Software Testing the Reason the App Failed?
From a design standpoint, development should have been pretty straightforward. The functions of the app were not complicated, but it appears the app lacked testing. At the very least, there needed to be several different versions of the same app for disparate operating systems and devices. Further, all of these versions had to work perfectly for their one and only use; there was not going to be time to fix user issues after launch. And the consequences of getting any aspect of it wrong, as we now know, were quite dramatic and very public.
As noted, it appears the app was never fully tested. Whether that speaks to a lack of technical maturity on the part of the app developer or its management team, a full statewide rollout doesn’t seem to have been given a proper test run. Additionally, any issues with back-end integration never had the chance to come to light. Knowing what we now know, I’m willing to bet the back-end system collecting the caucus data was equally flawed.
A Wall Street Journal article revealed that Shadow, the company behind the 2020 Iowa caucuses app, was still completing the app the weekend before the caucuses. More red flags (honestly, red flag suppliers should be running out of stock at this point). According to multiple reports, and perhaps in the company’s slight defense, Shadow got only two months to build its app. That’s not even enough time to effectively test an app of this magnitude, let alone develop, test and successfully deploy it on a broad scale. Because every bug had to be resolved before release, the rush to have the application ready on time resulted in disaster. While Shadow perhaps tried to do the impossible, this is clearly a case where the client needed to be told “no” rather than handed a not-ready-for-prime-time product, especially one so important.
Additionally, Shadow’s app appeared to be built on a tight budget. Public records show Shadow was paid about $63,000 by the Iowa Democratic Party and $58,000 by the Nevada Democratic Party. For such a critical application, we would have expected a price tag of at least $150,000 to $200,000, with most of that cost going toward testing.
With more time and resources spent testing, one can assume that the bug responsible for this debacle would have been discovered and fixed. Further, do we even know at this point if we’ve seen all the issues? What about data security, final count accuracy and more?
Best Practices for Software Testing — The Way We Do It At Cardinal Peak
Thorough software testing is crucial to quality assurance, and following testing best practices results in software that does what it promises to do.
Focus on User Experience
The most important thing, at the end of the day, is usability. If users can’t use the app, as was the case with the Iowa caucuses app, it’s essentially worthless. Remember, this is an app that no one had used before and which users were unlikely to use again for at least a couple of years. As such, it’s important to answer the following questions during the test process (a minimal sketch of automating these checks follows the list):
- Can users, well, use the software?
- Are users having issues using it? If so, how do they get help for those issues?
- Does the software do what it’s supposed to?
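To make those questions concrete, here is a minimal pytest-style sketch of a smoke test covering the one flow every user needs. The ResultsApp class and its methods are hypothetical stand-ins invented for this illustration, not code from the Iowa caucuses app or from any Cardinal Peak project.

```python
# A minimal smoke-test sketch for the core user flow, assuming a hypothetical
# ResultsApp in which precinct chairs enter vote totals and report them.

import pytest


class ResultsApp:
    """Hypothetical stand-in for the app under test."""

    def __init__(self):
        self._totals = {}
        self.reported = False

    def enter_totals(self, totals: dict) -> None:
        # Reject obviously bad input loudly rather than storing it silently.
        if any(v < 0 for v in totals.values()):
            raise ValueError("vote totals cannot be negative")
        self._totals = dict(totals)

    def report(self) -> dict:
        self.reported = True
        return dict(self._totals)


def test_core_flow_enter_and_report():
    # "Does the software do what it's supposed to?"
    app = ResultsApp()
    app.enter_totals({"Candidate A": 120, "Candidate B": 95})
    assert app.report() == {"Candidate A": 120, "Candidate B": 95}
    assert app.reported


def test_bad_input_gives_a_clear_error():
    # "Are users having issues?" -- mistakes should fail with a clear error.
    app = ResultsApp()
    with pytest.raises(ValueError):
        app.enter_totals({"Candidate A": -1})
```

Even a handful of checks like these, run on every build, answers the basic question of whether the core flow still works before a single user touches the app.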
At Cardinal Peak, we help customers refine requirements, assess technical feasibility and understand the level of investment required to launch an application. We have a laser focus on the user, and every step we take and every engineer who touches the project moves it toward that goal.
Software Testing Starts at the Beginning of Development
Testing is an integral part of the software development process. By getting testers involved as active participants in requirements capture, developers can tailor a testing approach that makes sense with respect to both the development process and the customer’s requirements.
Testing is engineered into the Cardinal Peak process from day one. Our QA team of more than 20 professionals works continuously throughout each project, finding and correcting issues early and eliminating surprises before it’s too late.
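As an illustration of what involving testers during requirements capture can look like, here is a minimal sketch of turning a single requirement into an executable acceptance test before the feature is built. The 15% viability rule wording and the is_viable() helper are hypothetical examples, not the actual requirements or code from the caucuses app.

```python
# Requirement captured with the tester in the room (hypothetical wording):
# "A candidate receives delegates only if they reach the 15% viability threshold."

import pytest

VIABILITY_THRESHOLD = 0.15


def is_viable(candidate_votes: int, total_votes: int) -> bool:
    """First cut of the implementation, written to make the tests below pass."""
    if total_votes == 0:
        return False
    return candidate_votes / total_votes >= VIABILITY_THRESHOLD


@pytest.mark.parametrize(
    "votes, total, expected",
    [
        (15, 100, True),   # exactly at the threshold counts as viable
        (14, 100, False),  # just under the threshold is not viable
        (0, 0, False),     # an empty precinct must not crash the app
    ],
)
def test_viability_rule(votes, total, expected):
    assert is_viable(votes, total) is expected
```

Writing the test cases alongside the requirement forces ambiguities (is exactly 15% viable? what happens with zero votes?) to surface on day one instead of on caucus night.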
Integrate, Integrate, Integrate
One of the most often overlooked yet critical parts of building something that actually works is integration. Independent pieces of an application may work great on their own, yet not work nearly as well when they’re put together. That underscores the importance of integration, not only of features but also of your development team.
Building a wall between testers and developers doesn’t help anyone. It creates disconnects and ensures testers don’t feel they are part of the team responsible for delivering a successful app. Our testers are an integral part of the team and validate features as they are completed. We would much rather deal with integration problems upfront, ultimately saving ourselves from serious issues down the line. At Cardinal Peak, the code for any feature isn’t finished until it’s validated, integrated and actively running in the full final application.
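Here is a minimal sketch of what an integration test can add: two components that might each pass their unit tests in isolation are exercised together, end to end. ResultsBackend and PrecinctClient are hypothetical, in-memory stand-ins invented for this illustration.

```python
# A minimal integration-test sketch: verify the round trip between a
# hypothetical precinct-side client and the back end that aggregates results.

class ResultsBackend:
    """In-memory stand-in for the service that aggregates precinct results."""

    def __init__(self):
        self._by_precinct = {}

    def accept(self, precinct_id: str, totals: dict) -> None:
        self._by_precinct[precinct_id] = totals

    def statewide_totals(self) -> dict:
        combined = {}
        for totals in self._by_precinct.values():
            for candidate, votes in totals.items():
                combined[candidate] = combined.get(candidate, 0) + votes
        return combined


class PrecinctClient:
    """Stand-in for the app component that submits one precinct's results."""

    def __init__(self, backend: ResultsBackend):
        self._backend = backend

    def submit(self, precinct_id: str, totals: dict) -> None:
        self._backend.accept(precinct_id, totals)


def test_precinct_results_roll_up_statewide():
    backend = ResultsBackend()
    PrecinctClient(backend).submit("P-001", {"A": 40, "B": 60})
    PrecinctClient(backend).submit("P-002", {"A": 25, "B": 10})
    # The integration point: what the back end reports must match what clients sent.
    assert backend.statewide_totals() == {"A": 65, "B": 70}
```

It is exactly this kind of client-to-back-end round trip that, by all appearances, never got a proper workout before the caucuses.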
Testing Management and Reporting
Throughout the test process, it’s important to answer questions like: What does progress look like? Where do roadblocks exist? How are we tracking against the desired results? Despite comprehensive requirements, questions inevitably arise during a project. A thorough test management and reporting process goes a long way toward ensuring all parties are on the same page as they work toward the end goal.
Our process involves weekly status reports, which include statements about technical progress, percentage completion on major milestones and budget, addressing any concerns the customer may have. Again, no system is “finished” until we check that it meets specifications and fulfills its intended purpose.
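As a rough illustration of the roll-up behind that kind of weekly report, here is a minimal sketch that summarizes test outcomes per milestone. The result records and milestone names are invented for the example; in practice they would come from your test runner and tracking tools.

```python
# A minimal status roll-up sketch: count pass/fail results per milestone
# so the weekly report can show progress at a glance. All data is made up.

from collections import Counter

# Each record: (milestone, test name, outcome)
results = [
    ("result entry", "test_core_flow_enter_and_report", "pass"),
    ("result entry", "test_bad_input_gives_a_clear_error", "pass"),
    ("back-end sync", "test_precinct_results_roll_up_statewide", "fail"),
]

by_milestone = {}
for milestone, _name, outcome in results:
    by_milestone.setdefault(milestone, Counter())[outcome] += 1

print("Weekly test status")
for milestone, counts in by_milestone.items():
    total = sum(counts.values())
    passed = counts.get("pass", 0)
    print(f"  {milestone}: {passed}/{total} passing ({100 * passed // total}%)")
```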
What to Ask Developers/Look Out For
When it comes to mobile app development, customers should always ask about any previous apps a developer has deployed to the various app stores. If a developer has already taken an app through Google’s or Apple’s validation requirements and successfully deployed and distributed it, that’s a big win. Successful deployment is priority number one.
Further, ask about any difficulties or challenges the developer had in deploying their app. Their answer will speak to their level of experience. Additionally, it’s important to compare prices and make sure you aren’t simply choosing the highest- or lowest-priced vendor. If a price tag sounds too good to be true, it probably is. In the end, even though Iowa’s tally-by-app experiment failed, the fiasco can teach us a few lessons about the importance of software testing. Don’t build it so fast that it isn’t built right. And it’s not enough to just test your app; you must verify its operating efficacy in production, with all the integrated systems, while simulating expected levels of usage.
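To make “simulating expected levels of usage” concrete, here is a minimal load-simulation sketch. The precinct count, the latency target and the submit_results() stand-in are all hypothetical; a real check would drive the deployed app and its back end rather than an in-process function.

```python
# A minimal load-simulation sketch: many precincts report within a short window,
# and we check the worst-case latency against a made-up service-level target.

import time
from concurrent.futures import ThreadPoolExecutor

PRECINCTS = 1700              # hypothetical scale: precincts reporting in one evening
MAX_ACCEPTABLE_SECONDS = 0.5  # hypothetical per-submission latency target


def submit_results(precinct_id: int) -> float:
    """Stand-in for one precinct submitting results; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real network call to the back end
    return time.perf_counter() - start


with ThreadPoolExecutor(max_workers=100) as pool:
    latencies = list(pool.map(submit_results, range(PRECINCTS)))

worst = max(latencies)
print(f"{PRECINCTS} simulated submissions, worst latency {worst:.3f}s")
assert worst < MAX_ACCEPTABLE_SECONDS, "system would not keep up on caucus night"
```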
At the end of the day, if, like Shadow, you don’t have a lot of testers or aren’t rigorously testing your software, you aren’t making anything that’s production-ready. Our team’s extensive software development experience ensures we’re able to help you overcome even the most challenging issues. If you have any questions or would like to reduce the risk surrounding an upcoming software project, connect with us today.