White Papers in Software Engineering
- Software and System Safety Research Group: Industry and government are currently struggling to build complex, computer-controlled systems, often unsuccessfully, as witnessed by the failures of major projects. We envision the MIT Center for Software Research as a place where academia, industry, and government can come together to focus on stretching the limits of the complexity of the systems we can successfully engineer.
- Software Development Life Cycle (SDLC): As in other engineering disciplines, software engineering has structured models for software development. This document provides a generic overview of the software development methodologies adopted by contemporary software firms.
The following are the basic models used by many software development firms.
a) System Development Life Cycle (SDLC) Model
b) Prototyping Model
c) Rapid Application Development Model
d) Component Assembly Model
System Development Life Cycle Model (SDLC Model)
This is also known as the Classic Life Cycle Model, the Linear Sequential Model, or the Waterfall Method. It comprises the following activities.
1. System/Information Engineering and Modeling
2. Software Requirements Analysis
3. Systems Analysis and Design
4. Code Generation
5. Testing
6. Maintenance
White Papers - Test Automation
- How to Automate Testing of Graphical User Interfaces: This paper discusses the strengths and weaknesses of commercially available capture-and-replay GUI testing tools (CR-Tools) and presents a pragmatic and economical approach to testing graphical user interfaces using such tools. The results presented were developed within the ESSI Process Improvement Experiment (PIE) 24306 [EU1], [EU2] in 1997/98 at imbus GmbH, Germany [im1].
Today's software systems usually feature graphical user interfaces (GUIs). Because of the varied possibilities for user interaction and the number of control elements (buttons, pull-down menus, toolbars, etc.) available with GUIs, testing them is extremely time-consuming and costly. Manual testing of GUIs is labor-intensive, frequently monotonous, and not well liked by software engineers or software testers. A promising remedy is offered by automation, and several tools for computer-based testing of GUIs are already commercially available.
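To make the idea concrete, a minimal scripted GUI test of the kind such tools record and replay might look like the following Selenium WebDriver sketch (not taken from the paper; the URL, element IDs, and expected text are hypothetical):

```python
# Minimal scripted GUI test, similar in spirit to what a capture-and-replay
# tool records and replays. URL, element IDs, and expected text are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")             # open the GUI under test
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()  # replay the recorded click
    # Verification point: the replayed session must reach the expected state.
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```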
- Improving the Maintainability of Automated Test Suites: Automated, black-box, GUI-level regression test tools are popular in the industry. According to the popular mythology, people with little programming experience can use these tools to quickly create extensive test suites. The tools are (allegedly) easy to use. Maintenance of the test suites is (allegedly) not a problem. Therefore, the story goes, a development manager can save lots of money and aggravation, and can ship software sooner, by using one of these tools to replace some (or most) of those pesky testers.
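One common remedy for the maintenance problem (a general technique, not necessarily the paper's own recommendation) is to put an abstraction layer between test scripts and GUI details, so a renamed control is fixed in one place rather than in every script; a hypothetical sketch:

```python
# Hypothetical page-object-style wrapper: tests call intent-level methods,
# so a renamed button or moved field is fixed here once, not in every script.
from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        # All GUI details are confined to this class.
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()

# A test script now reads at the level of user intent:
#     LoginPage(driver).log_in("tester", "secret")
```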
- Software Test Automation and the Product Lifecycle: A product's stages of development are referred to as the product life cycle (PLC). There is considerable work involved in getting a product through its PLC. Software testing at many companies has matured as lessons have been learned about the most effective test methodologies. Still, there is a great difference of opinion about the implementation and effectiveness of automated software testing and how it relates to the PLC.
- Test Automation Frameworks: In today's environment of plummeting cycle times, test automation becomes an increasingly critical and strategic necessity. Assuming the level of testing in the past was sufficient (which is rarely the case), how do we possibly keep up with this new explosive pace of web-enabled deployment while retaining satisfactory test coverage and reducing risk? The answer is either more people for manual testing or a greater level of test automation. After all, a reduction in project cycle times generally means a reduction in the time available for testing.
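As one illustration of the leverage a framework can provide (terminology and details vary by framework), many teams use a data-driven style in which a single scripted procedure runs against a table of cases; a minimal pytest sketch with a hypothetical function under test:

```python
# Data-driven testing: one test procedure, many table-driven cases.
import pytest

def parse_price(text):
    # Hypothetical function under test: "1,000.50 USD" -> 1000.5
    return float(text.replace(",", "").split()[0])

# Adding coverage is now a matter of adding rows, not writing new scripts.
CASES = [
    ("19.99 USD", 19.99),
    ("0 USD", 0.0),
    ("1,000.50 USD", 1000.50),
]

@pytest.mark.parametrize("text,expected", CASES)
def test_parse_price(text, expected):
    assert parse_price(text) == expected
```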
White Papers - Manual Testing
- Facts and Myths about Test Automation: Today, software test automation is becoming more and more popular in both client/server and web environments. As requirements change constantly (new requirements are introduced almost daily) and the testing window gets smaller every day, managers are realizing a greater need for test automation. This is good news for us (people who do test automation). But I am afraid it is the only good news.
- When Should a Test Be Automated?: I want to automate as many tests as I can. I'm not comfortable running a test only once. What if a programmer then changes the code and introduces a bug? What if I don't catch that bug because I didn't rerun the test after the change? Wouldn't I feel horrible? Well, yes, but I'm not paid to feel comfortable rather than horrible. I'm paid to be cost-effective. It took me a long time, but I finally realized that I was over-automating, that only some of the tests I created should be automated. Some of the tests I was automating not only did not find bugs when they were rerun, they had no significant prospect of doing so. Automating them was not a rational decision.
The question, then, is how to make a rational decision. When I take a job as a contract tester, I typically design a series of tests for some product feature. For each of them, I need to decide whether that particular test should be automated. This paper describes how I think about the tradeoffs.
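One crude way to frame the cost side of that decision (a simplification, not the paper's full analysis, which also weighs a test's prospect of ever finding a bug) is a break-even calculation; all numbers below are hypothetical:

```python
# Crude break-even model for "should this test be automated?".
# The paper's actual analysis also weighs the test's prospect of finding
# bugs when rerun, which this simple cost model deliberately ignores.
def break_even_runs(cost_to_automate, cost_per_manual_run, cost_per_automated_run):
    """Reruns needed before automation becomes cheaper than manual execution."""
    savings_per_run = cost_per_manual_run - cost_per_automated_run
    if savings_per_run <= 0:
        return float("inf")  # automation never pays off on cost alone
    return cost_to_automate / savings_per_run

# Hypothetical example: 240 minutes to automate a test that takes
# 20 minutes manually and 2 minutes automated:
print(break_even_runs(240, 20, 2))  # ~13.3, so about 14 reruns to break even
```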
White Papers - Agile Testing
- Continuous Integration by Martin Fowler: Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly. This article is a quick overview of Continuous Integration summarizing the technique and its current usage.
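As a tool-agnostic illustration of "verified by an automated build (including test)", the check a CI server runs on every integration amounts to something like the following; the build and test commands are placeholders for a real project's toolchain:

```python
# Sketch of the verification a CI server performs on each integration.
# The commands are placeholders for a real project's build and test steps.
import subprocess
import sys

STEPS = [
    ["git", "pull", "--ff-only"],  # fetch the latest integrated sources
    ["make", "build"],             # build everything from scratch
    ["make", "test"],              # run the automated test suite
]

for step in STEPS:
    if subprocess.run(step).returncode != 0:
        # A failing step means the integration is broken; flag it immediately
        # so the team can fix it while the change is still small and fresh.
        print("Integration FAILED at:", " ".join(step))
        sys.exit(1)

print("Integration verified: build and tests passed.")
```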
- Context-Driven Testing: Context-driven software testing is a set of values about test methodology. It is not itself a test technique. To be a context-driven tester is to approach each testing situation as if it were unique in important ways, and to develop the skills to react to situations with a broad and deep awareness of problems in projects and possible testing-related solutions to those problems.
- The Seven Basic Principles of the Context-Driven School
- The value of any practice depends on its context.
- There are good practices in context, but there are no best practices.
- People, working together, are the most important part of any project's context.
- Projects unfold over time in ways that are often not predictable.
- The product is a solution. If the problem isn't solved, the product doesn't work.
- Good software testing is a challenging intellectual process.
- Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
- Success with Test Automation: This paper describes several principles for test automation. These principles were used to develop a system of automated tests for a new family of client/server applications at BMC Software. This work identifies the major concerns when staffing test automation with testers, developers, or contractors. It encourages applying standard software development processes to test automation. It identifies criteria for selecting appropriate tests to be automated and the advantages of a Testcase Interpreter. It describes how cascading failures prevent unattended testing. It identifies the most serious bug that can affect test automation systems and describes ways to avoid it. It sets out reasonable limits on test automation goals.
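A Testcase Interpreter of the kind the paper advocates is, roughly, a small engine that treats test cases as data and dispatches each step's keyword to an action routine; a hypothetical minimal version (the actions are trivial stand-ins for real application drivers):

```python
# Hypothetical testcase interpreter: test cases are data, and each step's
# keyword is dispatched to a registered action routine.
def do_open(target):
    print(f"opening {target}")

def do_type(target, text):
    print(f"typing {text!r} into {target}")

def do_check(target, expected):
    print(f"checking that {target} shows {expected!r}")

ACTIONS = {"open": do_open, "type": do_type, "check": do_check}

TESTCASE = [
    ("open", "login screen"),
    ("type", "username field", "tester"),
    ("check", "status bar", "logged in"),
]

def run(testcase):
    for keyword, *args in testcase:
        try:
            ACTIONS[keyword](*args)
        except Exception as exc:
            # Stopping at the first failed step is one simple way to keep a
            # bad step from cascading through an unattended run.
            print(f"stopped at step {keyword!r}: {exc}")
            return False
    return True

run(TESTCASE)
```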
White Papers - Other
- 16 Critical Software Practices: This paper outlines the 16 Critical Software Practices™ that serve as the basis for implementing effective performance-based management of software-intensive projects. They are intended to be used by programs that want to implement effective, high-leverage practices to improve their bottom-line measures (time to fielding, quality, cost, predictability, and customer satisfaction), and are for CIOs, PMs, sponsoring agencies, software project managers, and others involved in software engineering.
- Classic Testing Mistakes: Romances with coverage don't seem to end with the former devotee wanting to be "just good friends". When, at the end of a year's use of coverage, it has not solved the testing problem, I find testing groups abandoning it entirely. That's a shame. When I test, I spend somewhat less than 5% of my time looking at coverage results, rethinking my test design, and writing some new tests to correct my mistakes. It's time well spent.
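For the kind of periodic coverage check described above, the Python coverage.py library exposes a simple API (its command-line interface works similarly); the workload here is a trivial stand-in for running a real test suite under measurement:

```python
# Measuring statement coverage with coverage.py; the workload below is a
# trivial stand-in for running a real test suite under measurement.
import coverage

cov = coverage.Coverage()
cov.start()

def classify(n):
    return "even" if n % 2 == 0 else "odd"

for n in (1, 2, 3):
    classify(n)

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file coverage, with missed lines listed
```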
Some Classic Testing Mistakes
The role of testing
- Thinking the testing team is responsible for assuring quality.
- Thinking that the purpose of testing is to find bugs.
- Not finding the important bugs.
- Not reporting usability problems.
- No focus on an estimate of quality (and on the quality of that estimate).
- Reporting bug data without putting it into context.
- Starting testing too late (bug detection, not bug reduction).
Planning the complete testing effort
- A testing effort biased toward functional testing.
- Underemphasizing configuration testing.
- Putting stress and load testing off to the last minute.
- Not testing the documentation.
- Not testing installation procedures.
- An overreliance on beta testing.
- Finishing one testing task before moving on to the next.
- Failing to correctly identify risky areas.
- Sticking stubbornly to the test plan.
Personnel issues
- Using testing as a transitional job for new programmers.
- Recruiting testers from the ranks of failed programmers.
- Testers who are not domain experts.
- Not seeking candidates from the customer service staff or technical writing staff.
- Insisting that testers be able to program.
- A testing team that lacks diversity.
- A physical separation between developers and testers.
- Believing that programmers can't test their own code.
- Programmers are neither trained nor motivated to test.
The tester at work
- Paying more attention to running tests than to designing them.
- Unreviewed test designs.
- Being too specific about test inputs and procedures.
- Not noticing and exploring "irrelevant" oddities.
- Checking that the product does what it's supposed to do, but not that it doesn't do what it isn't supposed to do.
- Test suites that are understandable only by their owners.
- Testing only through the user-visible interface.
- Poor bug reporting.
- Adding only regression tests when bugs are found.
- Failing to take notes for the next testing effort.
Test automation
- Attempting to automate all tests.
- Expecting to rerun manual tests.
- Using GUI capture/replay tools to reduce test creation cost.
- Expecting regression tests to find a high proportion of new bugs.
Code coverage
- Embracing code coverage with the devotion that only simple numbers can inspire.
- Removing tests from a regression test suite just because they don't add coverage.
- Using coverage as a performance goal for testers.
- Abandoning coverage entirely.
Top Ten Tips for Bug Tracking
- A good tester will always try to reduce the repro steps to the minimal steps to reproduce; this is extremely helpful for the programmer who has to find the bug.
- Remember that the only person who can close a bug is the person who opened it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they saw is fixed.
- There are many ways to resolve a bug. FogBUGZ allows you to resolve a bug as fixed, won't fix, postponed, not repro, duplicate, or by design. (A minimal data model for this resolve/close workflow is sketched after this list.)
- Not Repro means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the repro steps.
- You'll want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number so that the poor tester doesn't have to retest the bug on a version of the software where it wasn't even supposed to be fixed.
- If you're a programmer, and you're having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you email with bug reports, just bounce the emails back to them with a brief message: "please put this in the bug database. I can't keep track of emails."
- If you're a tester, and you're having trouble getting programmers to use the bug database, just don't tell them about bugs - put them in the database and let the database email them.
- If you're a programmer, and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they'll get the hint.
- If you're a manager, and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people as entries in the database. A bug database is also a great "unimplemented feature" database.
- Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example, keeping track of the file where the bug was found; keeping track of what % of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to input bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database.
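To make the resolve/close workflow above concrete, here is a hypothetical minimal bug-record model (field names are illustrative; the resolution states follow the FogBUGZ-style list given earlier):

```python
# Hypothetical minimal bug record: anyone may resolve a bug, but only its
# original reporter may close it. Field names are illustrative.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Resolution(Enum):
    FIXED = "fixed"
    WONT_FIX = "won't fix"
    POSTPONED = "postponed"
    NOT_REPRO = "not repro"
    DUPLICATE = "duplicate"
    BY_DESIGN = "by design"

@dataclass
class Bug:
    bug_id: int
    opened_by: str
    build_id: str                          # build the bug was found in (see tip above)
    repro_steps: List[str] = field(default_factory=list)
    resolution: Optional[Resolution] = None
    closed: bool = False

    def resolve(self, resolution: Resolution) -> None:
        self.resolution = resolution       # anyone on the team may resolve

    def close(self, user: str) -> None:
        if user != self.opened_by:
            raise PermissionError("only the original reporter may close a bug")
        self.closed = True
```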
If you are developing code, even on a team of one, without an organized database listing all known bugs in the code, you are simply going to ship low quality code. On good software teams, not only is the bug database used universally, but people get into the habit of using the bug database to make their own "to-do" lists, they set their default page in their web browser to the list of bugs assigned to them, and they start wishing that they could assign bugs to the office manager to stock more Mountain Dew.