Early this summer, I was having lunch with a friend from high school who’s also a Computer Science major. We attend different universities, and, judging from what he’s told me, the differences in our experiences are obvious. We talked about the qualities employers look for, and I eventually remarked that even applicants without professional experience may find employers who favor the simple concepts taught in school over raw technical prowess. “What kinds of simple concepts?” he asked.
I told him how Source One valued unit testing experience. I also mentioned my roommate, who landed a software engineering internship at a telecommunications firm simply by bringing up unit/automated testing. The recruiter said something like, “Wow, no other candidate has discussed this before.” My friend asked, “What’s unit testing?”

In a nutshell: Unit testing is the practice of testing the individual pieces of a program in isolation to confirm that each one behaves as intended.
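For instance, here is a minimal sketch in Python; the add function and its tests are hypothetical, just to show the shape of a unit test:

    import unittest

    def add(a, b):
        # The "unit" under test: a single small function.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()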

In my CS program, virtually every homework assignment contains a unit testing component, and roughly half the time it takes to complete an assignment goes to constructing test cases and correcting errors. Our grades depend on this process. I asked my friend how he knows his programs are working. He replied, “Well, we just try to fulfill all the assigned requirements.” As the conversation continued, I learned that his curriculum is not unique in the academic CS world. Many schools simply do not teach unit testing as a standard practice. In my opinion, this is completely unacceptable.

Let’s take a step back. There’s no maxim that says, ‘If you have experience in unit testing, you are a great software engineer.’ In theory and in practice, however, it just makes sense.
Software engineering is still in its youth. It’s useless to compare it with fields that have accumulated centuries of standards and practices. That lack of convention causes many projects and businesses to fall apart, which is why software engineers and executives need to take care. According to the Standish Group’s 2015 CHAOS Report, about 20% of software initiatives fail outright, and over 50% are “challenged.”

The Standish Group’s original 1994 report was even bleaker: a staggering 31.1% of projects were canceled before completion, and 52.7% of projects cost 189% of their original estimates. The cost of these failures and overruns is just the tip of the iceberg. The lost opportunity costs are not measurable, but could easily number in the trillions of dollars. Just look at the City of Denver and you’ll realize the extent of this problem: at the time, the failure to produce reliable software for the baggage-handling system at Denver’s new airport was costing the city $1.1 million per day.

Failure typically stems from inconsistent executive involvement, unclear communication of requirements, and insufficient user feedback. It’s important to remember that the slightest error can provoke financial and legal mayhem. It can even kill. Take the Therac-25, for example, a radiation therapy machine used in the 1980s. The engineers who designed and coded its software were overconfident in their abilities and failed to test it properly. As a result, serious bugs slipped through, and some patients were administered roughly 100 times the appropriate dose of radiation.
Fortunately, developers and executives are growing more attuned to test-driven development, or TDD. In my experience, unit testing is just a subset of TDD, which looks something like this (a short sketch follows the list):

1. Understand the requirements.
2. Write tests for those requirements, choosing significant inputs designed to break the software.
3. Write the software.
4. Test the software.
5. If any test fails, fix the software.
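
Here’s how those steps might play out in miniature, using Python’s built-in unittest module. The requirement, the is_valid_username function, and its rules are all invented for illustration:

    import unittest

    # Step 2: write the tests first, straight from the (hypothetical)
    # requirement: a username is 1-32 characters, letters and digits only.
    class TestIsValidUsername(unittest.TestCase):
        def test_accepts_simple_name(self):
            self.assertTrue(is_valid_username("alice"))

        def test_rejects_empty_string(self):
            self.assertFalse(is_valid_username(""))

        def test_rejects_overlong_name(self):
            self.assertFalse(is_valid_username("a" * 33))

    # Step 3: write just enough software to satisfy the requirement.
    def is_valid_username(name):
        return 0 < len(name) <= 32 and name.isalnum()

    # Steps 4-5: run the tests; any failure sends you back to step 3.
    if __name__ == "__main__":
        unittest.main()

Run this before the function exists and every test fails. That failing state is exactly where TDD says you should begin.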

The process should begin before any software is written. Because the tests come from the requirements rather than from the implementation, we call this ‘black box testing,’ and it’s why I tend not to ask, “How can I make the software?” but rather, “How can I break the software?”
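
In that spirit, here’s a hedged sketch of what “trying to break the software” can look like. The letter_grade function and its spec are made up, and the tests are written purely against that spec, hammering on boundaries and invalid inputs rather than the happy path:

    import unittest

    def letter_grade(score):
        # Invented spec: 90-100 -> "A", 80-89 -> "B", 70-79 -> "C",
        # 60-69 -> "D", below 60 -> "F"; outside 0-100 is invalid.
        if not 0 <= score <= 100:
            raise ValueError("score must be between 0 and 100")
        if score >= 90:
            return "A"
        if score >= 80:
            return "B"
        if score >= 70:
            return "C"
        if score >= 60:
            return "D"
        return "F"

    class TestLetterGrade(unittest.TestCase):
        def test_exact_boundaries(self):
            # Off-by-one errors love boundaries, so test right on them.
            self.assertEqual(letter_grade(90), "A")
            self.assertEqual(letter_grade(89), "B")
            self.assertEqual(letter_grade(60), "D")
            self.assertEqual(letter_grade(59), "F")

        def test_inputs_meant_to_break_it(self):
            # Values just outside the valid range should be rejected.
            with self.assertRaises(ValueError):
                letter_grade(-1)
            with self.assertRaises(ValueError):
                letter_grade(101)

    if __name__ == "__main__":
        unittest.main()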
Opponents, however, fear that extensive testing (if you consider this extensive) doesn’t contribute to a positive ROI. Even Edsger Dijkstra, the computer scientist famous for the shortest-path algorithm taught in universities, observed that testing can confirm the presence of bugs but can never prove their absence. For practical purposes, I’d posit that his statement carries little weight. This is where the 80/20 rule comes in: roughly 80% of bugs can be fixed or prevented by eliminating 20% of their causes. Catching bugs early always helps development run more smoothly. In the future, TDD needs to become the gold standard for software engineering. We’ve already seen the failures of “cowboy coding” and its lack of accountability.

The demand for programmers and software engineers is obvious. During the dot-com boom, about 40,000 computer science degrees were granted each year. After faltering slightly, that number climbed again in the mid-2000s to about 60,000 degrees, and now we’re back at dot-com levels. I can assure you most of these people are not trying to get into academia.

And it’s not just CS majors who are on the rise. More and more people are becoming interested in programming, either taking a few college courses or teaching themselves, in hopes of applying their new knowledge in the professional world. But let’s make one thing clear: programming is just a subset of software engineering. You can possess all the technical skill in the world, but if you can’t perform the managerial, communicative, logical, and analytical duties the job demands, you may become a liability to your eventual employer.
