mirror of
https://github.com/amyjko/cooperative-software-development
synced 2025-01-14 08:01:05 +01:00
Fixed #171.
This commit is contained in:
parent
57d3de3896
commit
81e358090e
6 changed files with 8 additions and 8 deletions
@@ -2,7 +2,7 @@ Despite all of your hard work at design, implementation, and verification, your
# What is debugging?
-Before we can talk about debugging, it's first important to consider what counts as a "bug". This term, which according to computer science mythology, first emerged from actual insects finding their way into the vacuum tubes of mainframes, is actually quite vague. Is the "bug" the incorrect code? Is it the faulty behavior that occurs at runtime when that incorrect code executes? Is it the problem that occurs in the world when the software misbehaves, such as a program crashing, or an operating system hanging? "*Bug*" actually refers to all of things things, which makes it a colloquial, but imprecise term.
+Before we can talk about debugging, it's first important to consider what counts as a "bug". This term is thought to have emerged in the late 19th century to describe any problem with a machine. Grace Hopper's team then reportedly [found it amusing|https://en.wikipedia.org/wiki/Bug_(engineering)#History] when they found an actual moth in an electromechanical relay in their early computer. And yet, the term is actually quite vague. Is the "bug" the incorrect code? Is it the faulty behavior that occurs at runtime when that incorrect code executes? Is it the problem that occurs in the world when the software misbehaves, such as a program crashing, or an operating system hanging? "*Bug*" actually refers to all of these things, which makes it a colloquial, but imprecise term.
To clarify things, consider four definitions<ko05b>.
@@ -1,4 +1,4 @@
-Computers haven't been around for long. If you read one of the many histories of computing and information, such as James Gleick's _The Information_<gleick11>, Jonathan Grudin's _From Tool to Partner: The Evolution of Human-Computer Interaction_<grudin17>, or Margo Shetterly's _Hidden Figures_<shetterly17>, you'll learn that before _digital_ computers, computers were people, calculating things manually. And that _after_ digital computers, programming wasn't something that many people did. It was reserved for whoever had access to the mainframe and they wrote their programs on punchcards like the one above. Computing was in no way a ubiquitous, democratized activity--it was reserved for the few that could afford and maintain a room-sized machine.
+Computers haven't been around for long. If you read one of the many histories of computing and information, such as James Gleick's _The Information_<gleick11>, Jonathan Grudin's _From Tool to Partner: The Evolution of Human-Computer Interaction_<grudin17>, or Margo Shetterly's _Hidden Figures_<shetterly17>, you'll learn that before _digital_ computers, computers were people, calculating things manually. And that _after_ digital computers, programming wasn't something that many people did. It was reserved for whoever had access to the mainframe and they wrote their programs on punchcards. Computing was in no way a ubiquitous, democratized activity--it was reserved for the few that could afford and maintain a room-sized machine.
Because programming required such painstaking planning in machine code and computers were slow, most programs were not that complex. Their value was in calculating things faster than a person could do by hand, which meant thousands of calculations in a minute rather than one calculation in a minute. Computer programmers were not solving problems that had no solutions yet; they were translating existing solutions (for example, a quadratic formula) into machine instructions. Their power wasn't in creating new realities or facilitating new tasks; it was in accelerating old tasks.
@@ -19,13 +19,13 @@ It's also hard to monitor performance without actually _harming_ performance. Ma
Monitoring for data breaches, identity theft, and other security and privacy concerns is an incredibly important part of running a service, but also very challenging. This is partly because the tools for doing this monitoring are not yet well integrated, requiring each team to develop its own practices and monitoring infrastructure. But it's also because protecting data and identity is more than just detecting and blocking malicious payloads. It's also about recovering from ones that get through, developing reliable data streams about application network activity, monitoring for anomalies and trends in those streams, and developing practices for tracking and responding to warnings that your monitoring system might generate. Researchers are still actively inventing more scalable, usable, and deployable techniques for all of these activities.
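To make the idea of monitoring an activity stream for anomalies concrete, here is a minimal sketch in TypeScript. It assumes a hypothetical feed of per-minute request counts and flags counts that deviate sharply from a rolling baseline; real monitoring infrastructure is far richer, but the core comparison is similar.

```typescript
// A minimal sketch of anomaly detection over a stream of per-minute
// request counts. The data source and thresholds are hypothetical;
// production systems would rely on dedicated monitoring infrastructure.
class RollingAnomalyDetector {
  private window: number[] = [];

  constructor(private windowSize = 60, private deviations = 3) {}

  // Returns true if `count` deviates from the recent baseline by more
  // than `deviations` standard deviations.
  observe(count: number): boolean {
    const anomalous = this.window.length >= this.windowSize && this.isOutlier(count);
    this.window.push(count);
    if (this.window.length > this.windowSize) this.window.shift();
    return anomalous;
  }

  private isOutlier(count: number): boolean {
    const mean = this.window.reduce((a, b) => a + b, 0) / this.window.length;
    const variance = this.window.reduce((a, b) => a + (b - mean) ** 2, 0) / this.window.length;
    const stdDev = Math.sqrt(variance);
    return stdDev > 0 && Math.abs(count - mean) > this.deviations * stdDev;
  }
}

// Usage: feed each minute's request count from a logging pipeline and
// alert when a minute looks unlike the recent past.
const detector = new RollingAnomalyDetector(5, 3);
const alerts = [120, 118, 125, 119, 123, 950].filter((c) => detector.observe(c)); // [950]
```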
-The biggest limitation of the monitoring above is that it only reveals _what_ people are doing with your software, not _why_ they are doing it, or why it has failed. Monitoring can help you know that a problem exists, but it can't tell you why a program failed or why a persona failed to use your software successfully.
+The biggest limitation of the monitoring above is that it only reveals _what_ people are doing with your software, not _why_ they are doing it, or why it has failed. Monitoring can help you know that a problem exists, but it can't tell you why a program failed or why a person failed to use your software successfully.
# Discovering Missing Requirements
Usability problems and missing features, unlike some of the preceding problems, are even harder to detect or observe, because the only true indicator that something is hard to use is in a user's mind. That said, there are a couple of approaches to detecting the possibility of usability problems.
-One is by monitoring application usage. Assuming your users will tolerate being watched, there are many techniques: 1) automatically instrumenting applications for user interaction events, 2) mining events for problematic patterns, and 3) browsing and analyzing patterns for more subjective issues<ivory01>. Modern tools and services like make it easier to capture, store, and analyze this usage data, although they still require you to have some upfront intuition about what to monitor. More advanced, experimental techniques in research automatically analyze undo events as indicators of usability problems<akers09> this work observes that undo is often an indicator of a mistake in creative software, and mistakes are often indicators of usability problems.
+One is by monitoring application usage. Assuming your users will tolerate being watched, there are many techniques: 1) automatically instrumenting applications for user interaction events, 2) mining events for problematic patterns, and 3) browsing and analyzing patterns for more subjective issues<ivory01>. Modern tools and services make it easier to capture, store, and analyze this usage data, although they still require you to have some upfront intuition about what to monitor. More advanced, experimental techniques in research automatically analyze undo events as indicators of usability problems<akers09>. This work observes that undo is often an indicator of a mistake in creative software, and mistakes are often indicators of usability problems.
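As a rough sketch of what such instrumentation and mining can look like, the TypeScript below logs a few browser interaction events and counts edits that are undone within a few seconds, echoing the undo heuristic above. The event set, the in-memory log, and the five-second window are illustrative assumptions, not the tools from the cited studies.

```typescript
// A minimal sketch of (1) instrumenting an application for user interaction
// events and (2) mining those events for a usability signal: edits that are
// quickly undone. The event set and time window are illustrative assumptions.
type InteractionEvent = { type: "click" | "edit" | "undo"; time: number };

const events: InteractionEvent[] = [];

// (1) Instrumentation: record clicks, edits, and undo commands (Ctrl/Cmd+Z).
function instrument(doc: Document): void {
  doc.addEventListener("click", () => events.push({ type: "click", time: Date.now() }));
  doc.addEventListener("input", () => events.push({ type: "edit", time: Date.now() }));
  doc.addEventListener("keydown", (e) => {
    if ((e.ctrlKey || e.metaKey) && e.key.toLowerCase() === "z") {
      events.push({ type: "undo", time: Date.now() });
    }
  });
}

// (2) Mining: an undo shortly after an edit often indicates a mistake, and
// mistakes often indicate usability problems, so count edits undone quickly.
function quicklyUndoneEdits(log: InteractionEvent[], windowMs = 5000): number {
  let count = 0;
  for (let i = 0; i + 1 < log.length; i++) {
    if (log[i].type === "edit" && log[i + 1].type === "undo" &&
        log[i + 1].time - log[i].time < windowMs) {
      count++;
    }
  }
  return count;
}
```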
All of the usage data above can tell you _what_ your users are doing, but not _why_. For this, you'll need to get explicit feedback from support tickets, support forums, product reviews, and other critiques of user experience. Some of these types of reports go directly to engineering teams, becoming part of bug reporting systems, while others end up in customer service or marketing departments. While all of this data is valuable for monitoring user experience, most companies still do a bad job of using anything but bug reports to improve user experience, overlooking the rich insights in customer service interactions<chilana11>.
@@ -35,7 +35,7 @@ While all of this research has strong implications for practice, industry has la
Note that none of these had any empirical evidence to back them. Moreover, Beck described in his original proposal that these ideas were best for "_outsourced or in-house development of small- to medium-sized systems where requirements are vague and likely to change_", but as industry often does, it began hyping XP as a universal solution to software project management woes and adopted all kinds of combinations of these ideas, adapting them to their existing processes. In reality, the value of XP appears to depend on highly project-specific factors<müller03>, while the core ideas that industry has adopted are valuing feedback, communication, simplicity, and respect for individuals and the team<sharp04>. Researchers continue to investigate the merits of the list above; for example, numerous studies have investigated the effects of pair programming on defects, finding small but measurable benefits<dibella13>.
-At the same time, Beck began also espousing the idea of ["Agile" methods|http://agilemanifesto.org/], which celebrated many of the values underlying Extreme Programming, such as focusing on individuals, keeping things simple, collaborating with customers, and being iterative. This idea of begin agile was even more popular and spread widely in industry and research, even though many of the same ideas appeared much earlier in Boehm's work on the Spiral model. Researchers found that Agile methods can increase developer enthusiasm<syedabdullah06>, that agile teams need different roles such as Mentor, Co-ordinator, Translator, Champion, Promoter, and Terminator<hoda10>, and that teams are combing agile methods with all kinds of process ideas from other project management frameworks such as [Scrum|https://en.wikipedia.org/wiki/Scrum_(software_development)] (meet daily to plan work, plan two-week sprints, maintain a backlog of work) and Kanban (visualize the workflow, limit work-in-progress, manage flow, make policies explicit, and implement feedback loops)<albaik15>. Research has also found that transitioning a team to Agile methods is slow and complex because it requires everyone on a team to change their behavior, beliefs, and practices<hoda17>.
+At the same time, Beck began also espousing the idea of ["Agile" methods|http://agilemanifesto.org/], which celebrated many of the values underlying Extreme Programming, such as focusing on individuals, keeping things simple, collaborating with customers, and being iterative. This idea of being agile was even more popular and spread widely in industry and research, even though many of the same ideas appeared much earlier in Boehm's work on the Spiral model. Researchers found that Agile methods can increase developer enthusiasm<syedabdullah06>, that agile teams need different roles such as Mentor, Co-ordinator, Translator, Champion, Promoter, and Terminator<hoda10>, and that teams are combining agile methods with all kinds of process ideas from other project management frameworks such as [Scrum|https://en.wikipedia.org/wiki/Scrum_(software_development)] (meet daily to plan work, plan two-week sprints, maintain a backlog of work) and Kanban (visualize the workflow, limit work-in-progress, manage flow, make policies explicit, and implement feedback loops)<albaik15>. Research has also found that transitioning a team to Agile methods is slow and complex because it requires everyone on a team to change their behavior, beliefs, and practices<hoda17>.
Ultimately, all of this energy around process ideas in industry is exciting, but disorganized. None of these efforts really get to the core of what makes software projects difficult to manage. One effort in research tries to get to this core by contributing new theories that explain these difficulties. The first is Herbsleb's *Socio-Technical Theory of Coordination (STTC)*. The idea of the theory is quite simple: _technical dependencies_ in engineering decisions (e.g., this function calls this other function, this data type stores this other data type) define the _social constraints_ that the organization must solve in a variety of ways to build and maintain software<herbsleb03,herbsleb16>. The better the organization builds processes and awareness tools to ensure that the people who own those engineering dependencies are communicating and aware of each others' work, the fewer defects that will occur. Herbsleb referred to this alignment as _sociotechnical congruence_, and conducted a number of studies demonstrating its predictive and explanatory power.
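To make sociotechnical congruence slightly more concrete, here is a simplified sketch: given which engineer owns which module, which modules technically depend on each other, and which pairs of engineers actually communicate, it computes the fraction of dependency-induced coordination needs that are met. The toy inputs are hypothetical; the studies cited derive them from version control, task, and communication data.

```typescript
// A simplified sketch of sociotechnical congruence: the share of coordination
// needs implied by technical dependencies that are matched by actual
// communication. All inputs here are hypothetical toy data.
function congruence(
  owners: Map<string, string>,        // module -> owning engineer
  dependencies: [string, string][],   // technical dependencies between modules
  communicates: Set<string>           // "alice|bob" pairs who actually talk
): number {
  const pairKey = (a: string, b: string) => [a, b].sort().join("|");

  // Coordination requirements: owners of dependent modules should talk.
  const required = new Set<string>();
  for (const [from, to] of dependencies) {
    const a = owners.get(from);
    const b = owners.get(to);
    if (a && b && a !== b) required.add(pairKey(a, b));
  }
  if (required.size === 0) return 1;

  // Congruence: how many of those required pairs actually communicate.
  let met = 0;
  for (const pair of required) if (communicates.has(pair)) met++;
  return met / required.size;
}

// Example: the parser depends on the lexer, but their owners never talk,
// so only one of two coordination needs is met and congruence is 0.5.
const score = congruence(
  new Map([["lexer", "alice"], ["parser", "bob"], ["ui", "carol"]]),
  [["parser", "lexer"], ["ui", "parser"]],
  new Set(["bob|carol"])
);
```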
@@ -26,4 +26,4 @@ A different way to think about productivity is to consider it from a "waste" per
One could imagine using these concepts to refine processes and practices in a team, helping both developers and managers be more aware of sources of waste that harm productivity.
-Of course, productivity is not only shaped by professional and organizational factors, but personal ones as well. Consider, for example, an engineer that has friends, wealth, health care, health, stable housing, sufficient pay, and safety: they likely have everything they need to bring their full attention to their work. In contrast, imagine an engineer that is isolated, has immense debt, has no health care, has a chronic disease like diabetes, is being displaced from an apartment by gentrification, has lower pay than their peers, or does not feel safe in public. Any one of these factors might limit an engineer's ability to be productive at work; some people might experience multiple, or even all of these factors, especially if they are a person of color in the United States, who has faced a lifetime of racist inequities in school, health care, and housing. Because of the potential for such inequities to influence someone's ability to work, managers and organizations need to make space for surfacing these inequities at work, so that teams can acknowledgement them, plan around them, and ideally address them through targeted supports. Anything less tends to make engineers feel unsupported, which will only decrease their motivation to contribute to a team. These widely varying conceptions of productivity reveal that programming in a software engineering context is about far more than just writing a lot of code. It's about coordinating productively with a team, synchronizing your work with an organizations goals, and most importantly, reflecting on ways to change work to achieve those goals more effectively.
+Of course, productivity is not only shaped by professional and organizational factors, but personal ones as well. Consider, for example, an engineer that has friends, wealth, health care, health, stable housing, sufficient pay, and safety: they likely have everything they need to bring their full attention to their work. In contrast, imagine an engineer that is isolated, has immense debt, has no health care, has a chronic disease like diabetes, is being displaced from an apartment by gentrification, has lower pay than their peers, or does not feel safe in public. Any one of these factors might limit an engineer's ability to be productive at work; some people might experience multiple, or even all of these factors, especially if they are a person of color in the United States, who has faced a lifetime of racist inequities in school, health care, and housing. Because of the potential for such inequities to influence someone's ability to work, managers and organizations need to make space for surfacing these inequities at work, so that teams can acknowledge them, plan around them, and ideally address them through targeted supports. Anything less tends to make engineers feel unsupported, which will only decrease their motivation to contribute to a team. These widely varying conceptions of productivity reveal that programming in a software engineering context is about far more than just writing a lot of code. It's about coordinating productively with a team, synchronizing your work with an organization's goals, and most importantly, reflecting on ways to change work to achieve those goals more effectively.
@@ -2,13 +2,13 @@ Once you have a problem, a solution, and a design specification, it's entirely r
It depends. This mentality towards product design works fine if building and deploying something is cheap and getting feedback has no consequences. Simple consumer applications often benefit from this simplicity, especially early stage ones, because there's little to lose. For example, if you are starting a company, and do not even know if there is a market opportunity yet, it may be worth quickly prototyping an idea, seeing if there's interest, and then later thinking about how to carefully architect a product that meets that opportunity. This is [how products such as Facebook started|https://en.wikipedia.org/wiki/History_of_Facebook], with a poorly implemented prototype that revealed an opportunity, which was only later translated into a functional, reliable software service.
-However, what if prototyping a beta _isn't_ cheap to build? What if your product only has one shot at adoption? What if you're building something for a client and they want to define success? Worse yet, what if your product could _kill_ people if it's not built properly? Consider the [U.S. HealthCare.gov launch|https://en.wikipedia.org/wiki/HealthCare.gov], for example, which was lambasted for its countless defects and poor scalability at launch, only working for 1,100 simultaneous users, when 50,000 were exected and 250,000 actually arrived. To prevent disastrous launches like this, software teams have to be more careful about translating a design specification into a specific explicit set of goals that must be satisfied in order for the implementation to be complete. We call these goals *requirements* and we call this process of *requirements engineering*<sommerville97>.
+However, what if prototyping a beta _isn't_ cheap to build? What if your product only has one shot at adoption? What if you're building something for a client and they want to define success? Worse yet, what if your product could _kill_ people if it's not built properly? Consider the [U.S. HealthCare.gov launch|https://en.wikipedia.org/wiki/HealthCare.gov], for example, which was lambasted for its countless defects and poor scalability at launch, only working for 1,100 simultaneous users, when 50,000 were expected and 250,000 actually arrived. To prevent disastrous launches like this, software teams have to be more careful about translating a design specification into a specific explicit set of goals that must be satisfied in order for the implementation to be complete. We call these goals *requirements* and we call this process *requirements engineering*<sommerville97>.
In principle, requirements are a relatively simple concept. They are simply statements of what must be true about a system to make the system acceptable. For example, suppose you were designing an interactive mobile game. You might want to write the requirement _The frame rate must never drop below 60 frames per second._ This could be important for any number of reasons: the game may rely on interactive speeds, your company's reputation may be for high fidelity graphics, or perhaps that high frame rate is key to creating a sense of realism. Or, imagine your game company has a reputation for high-performance, high-fidelity, high-frame-rate graphics, and achieving any less would erode your company's brand. Whatever the reasons, expressing it as a requirement makes it explicit that any version of the software that doesn't meet that requirement is unacceptable, and sets a clear goal for engineering to meet.
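One way to make a requirement like this actionable is to encode it as an automated check that the team runs before release. The sketch below assumes a hypothetical list of per-frame render times from a performance test; the measurement hook is invented for illustration, but it shows how an explicit threshold turns "acceptable" into something checkable.

```typescript
// A sketch of turning the requirement "the frame rate must never drop below
// 60 frames per second" into an automated check. The frame timing data is
// assumed to come from a hypothetical performance test harness.
const REQUIRED_FPS = 60;
const MAX_FRAME_MS = 1000 / REQUIRED_FPS; // ~16.7 ms budget per frame

// frameDurationsMs: how long each rendered frame took, in milliseconds.
function meetsFrameRateRequirement(frameDurationsMs: number[]): boolean {
  return frameDurationsMs.every((ms) => ms <= MAX_FRAME_MS);
}

// In a release gate: reject any build with even one frame over budget.
const acceptable = meetsFrameRateRequirement([15.8, 16.1, 16.4]); // true
```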
The general idea of writing down requirements is actually a controversial one. Why not just discover what a system needs to do incrementally, through testing, user feedback, and other methods? Some of the original arguments for writing down requirements actually acknowledged that software is necessarily built incrementally, but that it is nevertheless useful to write down requirements from the outset<parnas86>. This is because requirements help you plan everything: what you have to build, what you have to test, and how to know when you're done. The theory is that by defining requirements explicitly, you plan, and by planning, you save time.
-Do you really have to plan by _writing down_ requirements? For example, why not do what designers do, expressing requirements in the form of prototypes and mockups. These _implicitly_ state requirements, because they suggest what the software is supposed to do without saying it directly. But for some types of requirements, they actually imply nothing. For example, how responsive should a web page be to be? A prototype doesn't really say; an explicit requirement of _an average page load time of less than 1 second_ is quite explicit. Requirements can therefore be thought of more like an architect's blueprint: they provide explicit definitions and scaffolding of project success.
+Do you really have to plan by _writing down_ requirements? For example, why not do what designers do, expressing requirements in the form of prototypes and mockups? These _implicitly_ state requirements, because they suggest what the software is supposed to do without saying it directly. But for some types of requirements, they actually imply nothing. For example, how responsive should a web page be? A prototype doesn't really say; an explicit requirement of _an average page load time of less than 1 second_ is quite explicit. Requirements can therefore be thought of more like an architect's blueprint: they provide explicit definitions and scaffolding of project success.
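To see how an explicit responsiveness requirement can be checked where a prototype says nothing, here is a small sketch that records page load durations with the browser's Navigation Timing API and compares their average to the one-second threshold; how samples are gathered and aggregated across real users is simplified for illustration.

```typescript
// A sketch of checking the explicit requirement "an average page load time
// of less than 1 second". Sample collection across users is a simplification;
// here each page view just reports its own load duration.
const REQUIRED_AVERAGE_MS = 1000;

// Duration from the start of navigation to the load event, in milliseconds.
function currentPageLoadMs(): number {
  const nav = performance.getEntriesByType("navigation")[0] as PerformanceNavigationTiming;
  return nav.duration;
}

function meetsLoadTimeRequirement(samplesMs: number[]): boolean {
  if (samplesMs.length === 0) return false;
  const average = samplesMs.reduce((a, b) => a + b, 0) / samplesMs.length;
  return average < REQUIRED_AVERAGE_MS;
}
```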
And yet, like design, requirements come from the world and the people in it and not from software<jackson01>. Because they come from the world, requirements are rarely objective or unambiguous. For example, some requirements come from law, such as the European Union's General Data Protection Regulation ([GDPR|https://eugdpr.org/]), which specifies a set of data privacy requirements that all software systems used by EU citizens must meet. Other requirements might come from public pressure for change, as in Twitter's decision to label particular tweets as having false information or hate speech. Therefore, the methods that people use to do requirements engineering are quite diverse. Requirements engineers may work with lawyers to interpret policy. They might work with regulators to negotiate requirements. They might also use design methods, such as user research methods and rapid prototyping, to iteratively converge toward requirements<lamsweerde08>. The big difference between design and requirements engineering is that requirements engineers take the process one step further than designers, enumerating _in detail_ every property that the software must satisfy, and engaging with every source of requirements a system might need to meet, not just user needs.