Migrated to Peruse.

This commit is contained in:
Amy J. Ko 2020-09-08 14:07:08 -07:00
parent f056811235
commit e7267b9328
36 changed files with 835 additions and 2130 deletions


@ -1,137 +1,6 @@
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>Architecture</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/architecture" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/church.jpg" class="img-responsive" />
<small>Credit: Creative Commons 0</small>
<h1>Architecture</h1>
<div class="lead">Amy J. Ko</div>
<p>Once you have a sense of what your design must do (in the form of requirements or other less formal specifications), the next big problem is one of organization. How will you order all of the different data, algorithms, and control implied by your requirements? With a small program of a few hundred lines, you can get away without much organization, but as programs scale, they quickly become impossible to manage alone, let alone with multiple developers. Much of this challenge occurs because requirements <em>change</em>, and every time they do, code has to change to accommodate. The more code there is and the more entangled it is, the harder it is to change and the more likely you are to break things.</p>
<p>This is where <b>architecture</b> comes in. Architecture is a way of organizing code, just like building architecture is a way of organizing space. The idea of software architecture has at its foundation a principle of <b>information hiding</b>: the less a part of a program knows about other parts of a program, the easier it is to change. The most popular information hiding strategy is <b>encapsulation</b>: this is the idea of designing self-contained abstractions with well-defined interfaces that separate different concerns in a program. Programming languages offer encapsulation support through things like <b>functions</b> and <b>classes</b>, which encapsulate data and functionality together. Another programming language encapsulation method is <b>scoping</b>, which hides variables and other names from parts of a program outside a given scope. All of these strategies attempt to encourage developers to maximize information hiding and separation of concerns. If you get your encapsulation right, you should be able to easily make changes to a program's behavior without having to change <em>everything</em> about its implementation.</p>
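As a minimal sketch of encapsulation, consider this hypothetical JavaScript counter (the names are illustrative, not from any particular codebase): the variable holding its state is hidden inside a closure, so the rest of the program can only reach it through the small interface the function returns.

```javascript
// Hypothetical example: a counter whose state is fully encapsulated.
// Nothing outside makeCounter() can read or write `count` directly;
// the returned object is the component's entire interface.
function makeCounter() {
  let count = 0; // hidden state
  return {
    increment() { count += 1; },
    value() { return count; }
  };
}

const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.value()); // 2
```

Because callers depend only on `increment` and `value`, the representation of the count could change (say, to also log every change) without touching any caller.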
<p>When encapsulation strategies fail, one can end up with what some affectionately call a "ball of mud" architecture or "spaghetti code". Ball of mud architectures have no apparent organization, which makes it difficult to comprehend how parts of its implementation interact. A more precise concept that can help explain this disorder is <b>cross-cutting concerns</b>, which are things like features and functionality that span multiple different components of a system, or even an entire system. There is some evidence that cross-cutting concerns can lead to difficulties in program comprehension and long-term design degradation (<a href="#walker">Walker et al. 2012</a>), all of which reduce productivity and increase the risk of defects. As long-lived systems get harder to change, they can take on <em>technical debt</em>, which is the degree to which an implementation is out of sync with a team's understanding of what a product is intended to be. Many developers view such debt as emerging primarily from poor architectural decisions (<a href="#ernst">Ernst et al. 2015</a>). Over time, this debt can further result in organizational challenges (<a href="#khadka">Khadka et al. 2014</a>), making change even more difficult.</p>
<p>
The preventative solution to these problems is to try to design architecture up front, mitigating the various risks that come from cross-cutting concerns (defects, low modifiability, etc.) (<a href="#fairbanks">Fairbanks 2010</a>).
A popular method in the 1990's was the <a href="https://en.wikipedia.org/wiki/Unified_Modeling_Language">Unified Modeling Language</a> (UML), which was a series of notations for expressing the architectural design of a system before implementing it.
Recent studies show, however, that UML is generally not used in practice, and when it is, its use is selective and informal rather than universal (<a href="#petre">Petre 2013</a>).
While these formal representations have generally not been adopted, informal, natural language architectural specifications are still widely used.
For example, <a href="https://www.industrialempathy.com/posts/design-docs-at-google/">Google engineers write design specifications</a> to sort through ambiguities, consider alternatives, and clarify the volume of work required.
A study of developers' perceptions of the value of documentation also reinforced that many forms of documentation, including code comments, style guides, requirements specifications, installation guides, and API references, are viewed as critical, and are only viewed as less valuable because teams do not adequately maintain them (<a href="#aghajani">Aghajani et al. 2020</a>).
</p>
<p>
More recently, developers have converged on a set of common <b>architectural styles</b>, which are patterns of interactions and information exchange between encapsulated components.
Some common architectural styles include:
</p>
<ul>
<li><strong>Client/server</strong>, in which data is transacted in response to requests. This is the basis of the Internet and cloud computing (<a href="#cito">Cito et al. 2015</a>).</li>
<li><strong>Pipe and filter</strong>, in which data is passed from component to component, and transformed and filtered along the way. Command-line programs, compilers, and machine learning pipelines are examples of pipe and filter architectures.</li>
<li><strong>Model-view-controller (MVC)</strong>, in which data is separated from views of the data and from manipulations of data. Nearly all user interface toolkits use MVC, including popular modern frameworks such as React.</li>
<li><strong>Peer to peer (P2P)</strong>, in which components transact data through a distributed standard interface. Examples include Bitcoin, Spotify, and Gnutella.</li>
<li><strong>Event-driven</strong>, in which some components "broadcast" events and others "subscribe" to notifications of these events. Examples include most model-view-controller-based user interface frameworks, in which models broadcast change events to views, so that views may update themselves to render new model state.</li>
</ul>
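The event-driven style in the list above can be sketched with a hypothetical event bus (all names here are illustrative): publishers broadcast without knowing who, if anyone, is listening.

```javascript
// Hypothetical sketch of the event-driven style: components publish
// named events to a bus, and subscribers react, so neither side
// references the other directly.
class EventBus {
  constructor() { this.handlers = {}; }
  subscribe(event, handler) {
    (this.handlers[event] = this.handlers[event] || []).push(handler);
  }
  publish(event, data) {
    (this.handlers[event] || []).forEach(h => h(data));
  }
}

const bus = new EventBus();
// A "view" subscribes to model changes...
bus.subscribe("model:changed", state => console.log("render", state));
// ...and a "model" broadcasts them without knowing about the view.
bus.publish("model:changed", { items: 3 });
```

The key design property is the indirection: adding a second view requires no change to the model, only another subscription.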
<p>Architectural styles come in all shapes and sizes. Some are smaller design patterns of information sharing (<a href="#beck">Beck et al. 1996</a>), whereas others are ubiquitous but specialized patterns such as the architectures required to support undo and cancel in user interfaces (<a href="#bass">Bass et al. 2003</a>).</p>
<p>One fundamental unit of which an architecture is composed is a <b>component</b>. This is basically a word that refers to any abstraction&mdash;any code, really&mdash;that attempts to <em>encapsulate</em> some well-defined functionality or behavior separate from other functionality and behavior. For example, consider the Java class <em>Math</em>: it encapsulates a wide range of related mathematical functions. This class has an interface that determines how it can communicate with other components (sending arguments to a math function and getting a return value). Components can be more than classes though: they might be a data structure, a set of functions, a library, an API, or even something like a web service. All of these are abstractions that encapsulate interrelated computation and state for some well-defined purpose.</p>
<p>The second fundamental unit of architecture is the <b>connector</b>. Connectors are code that transmit information <em>between</em> components. They're brokers that connect components, but do not necessarily have meaningful behaviors or states of their own. Connectors can be things like function calls, web service API calls, events, requests, and so on. None of these mechanisms store state or functionality themselves; instead, they are the things that tie components' functionality and state together.</p>
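To make these two units concrete, here is a hypothetical sketch in JavaScript: the object is a component encapsulating related functionality, and the function call at the bottom is a connector that carries arguments in and a result out.

```javascript
// Hypothetical component: a bundle of related statistical functions
// behind a small, well-defined interface (much like Java's Math class).
const statistics = {
  mean(values) { return values.reduce((a, b) => a + b, 0) / values.length; },
  max(values) { return Math.max(...values); }
};

// The function call is the connector: it transmits the arguments and
// the return value, but holds no state or behavior of its own.
console.log(statistics.mean([1, 2, 3])); // 2
```

The call site knows only the interface (names and arguments), not how `mean` is implemented, which is exactly the information hiding that encapsulation is meant to achieve.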
<p>Even with carefully selected architectures, systems can still be difficult to put together, leading to <b>architectural mismatch</b> (<a href="#garlan">Garlan et al. 1995</a>). When mismatch occurs, connecting two styles can require dramatic amounts of code to connect, imposing significant risk of defects and cost of maintenance. One common example of mismatch occurs with the ubiquitous use of database schemas with client/server web applications. A single change in a database schema can often result in dramatic changes in an application, as every line of code that uses that part of the schema either directly or indirectly must be updated (<a href="#qiu">Qiu et al. 2013</a>). This kind of mismatch occurs because the component that manages data (the database) and the component that renders data (the user interface) are both highly "coupled" with the database schema: the user interface needs to know <em>a lot</em> about the data, its meaning, and its structure in order to render it meaningfully.</p>
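A hypothetical sketch of this coupling, and one way to loosen it: the first render function knows the database's column names directly, while the second depends only on a mapping layer, so a schema rename touches one function instead of every view. (The schema and function names here are invented for illustration.)

```javascript
// Tightly coupled (hypothetical): the view knows the schema's exact
// column names, so renaming the `fullname` column breaks it.
function renderUserCoupled(row) {
  return `<h1>${row.fullname}</h1><p>${row.email}</p>`;
}

// Looser coupling: only toUser() knows the schema. Views depend on a
// stable application-level shape instead of database columns.
function toUser(row) {
  return { name: row.fullname, email: row.email };
}
function renderUser(user) {
  return `<h1>${user.name}</h1><p>${user.email}</p>`;
}
```

The mapping layer does not eliminate the coupling, but it concentrates it in one place, which is often the best an architecture can do.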
<p>
The most common approach to dealing with both architectural mismatch and the changing of requirements over time is <b>refactoring</b>, which means changing the <em>architecture</em> of an implementation without changing its behavior.
Refactoring is something most developers do as part of changing a system (<a href="#murphyhill">Murphy-Hill et al. 2009</a>, <a href="#silva">Silva et al. 2016</a>).
Refactoring code to eliminate mismatch and technical debt can simplify change in the future, saving time (<a href="#ng">Ng et al. 2006</a>) and preventing future defects (<a href="#kim">Kim et al. 2012</a>).
However, because refactoring remains challenging, the difficulty of changing an architecture is often used as a rationale for rejecting demands for change from users.
For example, Google does not allow one to change their Gmail address, which greatly harms people who have changed their name (such as this author when she came out as a trans woman), forcing them to either live with an address that includes their old name, or abandon their Google account, with no ability to transfer documents or settings.
The rationale for this has nothing to do with policy and everything to do with the fact that the original architecture of Gmail treats the email address as a stable, unique identifier for an account.
Changing this basic assumption throughout Gmail's implementation would be an immense refactoring task.
</p>
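A small, hypothetical illustration of what "changing the architecture without changing behavior" means (the shipping rule and names are invented): the inlined rule below is extracted and named, so future changes touch one place, yet every caller sees identical results.

```javascript
// Before refactoring (hypothetical): the free-shipping rule is inlined
// at the call site, duplicating knowledge of the threshold and fee.
function orderTotalBefore(subtotal) {
  return subtotal >= 50 ? subtotal : subtotal + 5;
}

// After refactoring: the same rule, named and centralized. Observable
// behavior is unchanged; only the code's organization is.
const FREE_SHIPPING_THRESHOLD = 50;
const SHIPPING_FEE = 5;
function shippingFee(subtotal) {
  return subtotal >= FREE_SHIPPING_THRESHOLD ? 0 : SHIPPING_FEE;
}
function orderTotal(subtotal) {
  return subtotal + shippingFee(subtotal);
}

console.log(orderTotal(60) === orderTotalBefore(60)); // true
console.log(orderTotal(10) === orderTotalBefore(10)); // true
```

Gmail's email-address assumption is the same idea at vastly larger scale: the "rule" is woven through the whole implementation, so extracting it is no longer a small, safe transformation.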
<p>Research on the actual practice of software architecture is somewhat sparse. One of the more recent syntheses of this work is Petre et al.'s book, <em>Software Design Decoded</em> (<a href="#petre2">Petre et al. 2016</a>), which distills many of the practices and skills of software design into a set of succinct ideas. For example, the book states, "<em>Every design problem has multiple, if not infinite, ways of solving it. Experts strongly prefer simpler solutions over complex ones, for they know that such solutions are easier to understand and change in the future.</em>" And yet, in practice, studies of how projects use APIs often show that developers do the exact opposite, building projects with dependencies on large numbers of sometimes trivial APIs. This behavior suggests that while software <em>architects</em> like simplicity of implementation, software <em>developers</em> are often choosing whatever is easiest to build, rather than whatever is least risky to maintain over time (<a href="#abdalkareem">Abdalkareem et al. 2017</a>).</p>
<center class="lead"><a href="specifications.html">Next chapter: Specifications</a></center>
<h2>Further reading</h2>
<small>
<p id="abdalkareem">Rabe Abdalkareem, Olivier Nourry, Sultan Wehaibi, Suhaib Mujahid, and Emad Shihab. 2017. <a href="https://doi.org/10.1145/3106237.3106267">Why do developers use trivial packages? An empirical case study on npm</a>. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017). ACM, New York, NY, USA, 385-395.</p>
<p id="aghajani">Emad Aghajani, Csaba Nagy, Mario Linares-Vásquez, Laura Moreno, Gabriele Bavota, Michele Lanza, David C. Shepherd. 2020. <a href="https://www.inf.usi.ch/phd/aghajani/resources/papers/agha2020a.pdf">Software Documentation: The Practitioners Perspective</a>. International Conference on Software Engineering.</p>
<p id="bass">Len Bass, Bonnie E. John. 2003. <a href="http://www.sciencedirect.com/science/article/pii/S0164121202000766" target="_blank">Linking usability to software architecture patterns through general scenarios</a>. Journal of Systems and Software, Volume 66, Issue 3, Pages 187-197.</p>
<p id="beck">Kent Beck, Ron Crocker, Gerard Meszaros, John Vlissides, James O. Coplien, Lutz Dominick, and Frances Paulisch. 1996. <a href="https://doi.org/10.1109/ICSE.1996.493406" target="_blank">Industrial experience with design patterns</a>. In Proceedings of the 18th international conference on Software engineering (ICSE '96). IEEE Computer Society, Washington, DC, USA, 103-114.</p>
<p id="cito">J&uuml;rgen Cito, Philipp Leitner, Thomas Fritz, and Harald C. Gall. 2015. <a href="https://doi.org/10.1145/2786805.2786826" target="_blank">The making of cloud applications: an empirical study on software development for the cloud</a>. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 393-403.</p>
<p id="ernst">Neil A. Ernst, Stephany Bellomo, Ipek Ozkaya, Robert L. Nord, and Ian Gorton. 2015. <a href="https://doi.org/10.1145/2786805.2786848" target="_blank">Measure it? Manage it? Ignore it? Software practitioners and technical debt</a>. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 50-60.</p>
<p id="fairbanks">Fairbanks, G. (2010). <a href="https://www.amazon.com/Just-Enough-Software-Architecture-Risk-Driven/dp/0984618104" target="_blank">Just enough software architecture: a risk-driven approach</a>. Marshall & Brainerd.</p>
<p id="garlan">Garlan, D., Allen, R., & Ockerbloom, J. (1995). <a href="https://doi.org/10.1145/225014.225031" target="_blank">Architectural mismatch or why it's hard to build systems out of existing parts</a>. In Proceedings of the 17th international conference on Software engineering (pp. 179-185).</p>
<p id="khadka">Ravi Khadka, Belfrit V. Batlajery, Amir M. Saeidi, Slinger Jansen, and Jurriaan Hage. 2014. <a href="http://dx.doi.org/10.1145/2568225.2568318" target="_blank">How do professionals perceive legacy systems and software modernization?</a> In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 36-47.</p>
<p id="kim">Miryung Kim, Thomas Zimmermann, and Nachiappan Nagappan. 2012. <a href="http://dx.doi.org/10.1145/2393596.2393655" target="_blank">A field study of refactoring challenges and benefits</a>. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE '12). ACM, New York, NY, USA, Article 50, 11 pages.</p>
<p id="murphyhill">Emerson Murphy-Hill, Chris Parnin, and Andrew P. Black. 2009. <a href="http://dx.doi.org/10.1109/ICSE.2009.5070529" target="_blank">How we refactor, and how we know it</a>. In Proceedings of the 31st International Conference on Software Engineering (ICSE '09). IEEE Computer Society, Washington, DC, USA, 287-297.</p>
<p id="ng">T. H. Ng, S. C. Cheung, W. K. Chan, and Y. T. Yu. 2006. <a href="http://dx.doi.org/10.1145/1181775.1181778" target="_blank">Work experience versus refactoring to design patterns: a controlled experiment</a>. In Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering (SIGSOFT '06/FSE-14). ACM, New York, NY, USA, 12-22.</p>
<p id="petre">Marian Petre. 2013. <a href="https://ieeexplore.ieee.org/document/6606618/" target="_blank">UML in practice</a>. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 722-731.</p>
<p id="petre2">Petre, M., van der Hoek, A., & Quach, Y. (2016). <a href="https://books.google.com/books?id=EVE4DQAAQBAJ&lpg=PT17&ots=Tk-8QiRQnP&dq=%22software%20design%20decoded%22&lr&pg=PT17#v=onepage&q&f=false" target="_blank">Software Design Decoded: 66 Ways Experts Think</a>. MIT Press.</p>
<p id="silva">Danilo Silva, Nikolaos Tsantalis, and Marco Tulio Valente. 2016. <a href="https://doi.org/10.1145/2950290.2950305" target="_blank">Why we refactor? Confessions of GitHub contributors</a>. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 858-870.</p>
<p id="qiu">Dong Qiu, Bixin Li, and Zhendong Su. 2013. <a href="http://dx.doi.org/10.1145/2491411.2491431" target="_blank">An empirical analysis of the co-evolution of schema and code in database applications</a>. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM, New York, NY, USA, 125-135.</p>
<p id="walker">Robert J. Walker, Shreya Rawal, and Jonathan Sillito. 2012. <a href="http://dx.doi.org/10.1145/2393596.2393654" target="_blank">Do crosscutting concerns cause modularity problems?</a> In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE '12). ACM, New York, NY, USA, Article 49, 11 pages.</p>
</small>
<h2>Podcasts</h2>
<small>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2015/07/27/react-js-with-sebastian-markbage-and-christopher-chedeau/">React JS with Sebastian Markbage and Christopher Chedeau</a></p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

book.json Normal file

@ -0,0 +1,227 @@
{
"title": "Cooperative Software Development",
"authors": ["Amy J. Ko"],
"contributors": ["Benjamin Xie"],
"license": "[Creative Commons Attribution-NoDerivatives 4.0|https://creativecommons.org/licenses/by-nd/4.0/]",
"cover": ["cover.jpg", "A photograph of a racially and gender diverse team of six making decisions.", "Software engineering is inherently social.", "Amy J. Ko"],
"unknown": ["error.png", "A screen shot of a Linux operating system kernel panic", "Uh oh, something went wrong.", "William Pina"],
"description": "This book is an introduction to the many human, social, and political aspects of software engineering. It's unique in two ways. First, unlike many software engineering books, it explicitly avoids centering technical questions about software engineering, instead focusing on the many ways that software engineering work is cognitive, social, and organizational. Second, it does so by engaging extensively with academic research literature, summarizing key findings, but also questioning them, opening a dialog about the nature of software engineering work and the many factors that shape it. Anyone that reads it will be better prepared to critically engage in creating software in teams.\n\nThis book is a living document. Do you see ways to improve it? [Submit an issue|https://github.com/amyjko/cooperative-software-development/issues] or a [pull request|https://github.com/amyjko/cooperative-software-development/pulls] to its [GitHub repository|https://github.com/amyjko/cooperative-software-development].\n\n_This material is based upon work supported by the National Science Foundation under Grant No. 0952733. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation._",
"chapters": [
["History", "history", "Hamilton.jpg", "Margaret Hamilton working on the Apollo flight software.", "Margaret Hamilton working on the Apollo flight software.", "NASA"],
["Organizations", "organizations", "team.jpg", "A software team hard at work", "Early days at the author's software startup in 2012.", "Amy J. Ko"],
["Communication", "communication", "communication.png", "A man and a woman having a conversation", "Clear and timely communication is at the heart of effective software engineering.", "Public domain"],
["Productivity", "productivity", "productivity.jpg", "A woman working at a laptop", "Productivity isn't just about working fast.", "Creative Commons CC0"],
["Quality", "quality", "zoho.jpg", "A screenshot of the Zoho issue tracker.", "Software quality is multidimensional and often coarsely measured through issue trackers like this one.", "Zoho, Inc."],
["Requirements", "requirements", "scaffolding.jpg", "An architectural structure showing the framework of a glass structure", "Requirements specify what software must do, constraining, focusing, and defining its successful functioning.", "Public domain"],
["Architecture", "architecture", "church.jpg", "A photograph of a church hallway with arches.", "Architecture is how code is organized", "Creative Commons 0"],
["Specifications", "specifications", "blueprint.jpg", "A blueprint for an architectural plan", "Specifications add a layer of detail onto architectural plans.", "Public domain"],
["Process", "process", "flow.jpg", "A photograph of a river", "Good process is like a river, seamlessly flowing around obstacles", "Public domain"],
["Comprehension", "comprehension", "network.png", "A visualization of many interconnected nodes", "Program comprehension is about understanding dependencies", "Public domain"],
["Verification", "verification", "check.png", "A check mark", "Have you met your requirements? How do you know?", "Public domain"],
["Monitoring", "monitoring", "monitoring.jpg", "A photograph of a lifeguard monitoring a beach.", "It's not always easy to see software fail.", "Public domain"],
["Debugging", "debugging", "swatter.png", "An illustration of a fly swatter.", "Debugging is inevitable because defects are inevitable", "Public domain"]
],
"revisions": [
["September 2020", "Migrated to [Peruse|https://github.com/amyjko/peruse]."],
["July 2020", "Revised all chapters to address racism, sexism, and ableism in software engineering."],
["July 2019", "Incorporated newly published work from ICSE, ESEC/FSE, SIGCSE, TSE, and TOSEM."],
["July 2018", "Incorporated newly published work from ICSE, ESEC/FSE, SIGCSE, TSE, and TOSEM."],
["July 2017", "First draft of the book release."]
],
"references": {
"abbate12": "Abbate, Janet (2012). [Recoding Gender: Women's Changing Participation in Computing|https://mitpress.mit.edu/books/recoding-gender]. The MIT Press.",
"abdalkareem17": "Rabe Abdalkareem, Olivier Nourry, Sultan Wehaibi, Suhaib Mujahid, and Emad Shihab. 2017. [Why do developers use trivial packages? An empirical case study on npm|https://doi.org/10.1145/3106237.3106267]. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017). ACM, New York, NY, USA, 385-395.",
"aghajani20": "Emad Aghajani, Csaba Nagy, Mario Linares-Vásquez, Laura Moreno, Gabriele Bavota, Michele Lanza, David C. Shepherd. 2020. [Software Documentation: The Practitioners Perspective|https://www.inf.usi.ch/phd/aghajani/resources/papers/agha2020a.pdf]. International Conference on Software Engineering.",
"ahmed16": "Iftekhar Ahmed, Rahul Gopinath, Caius Brindescu, Alex Groce, and Carlos Jensen. 2016. [Can testedness be effectively measured?|https://doi.org/10.1145/2950290.2950324] In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 547-558.",
"akers09": "David Akers, Matthew Simpson, Robin Jeffries, and Terry Winograd. 2009. [Undo and erase events as indicators of usability problems|http://dx.doi.org/10.1145/1518701.1518804]. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ACM, New York, NY, USA, 659-668.",
"albaik15": "Al-Baik, O., & Miller, J. (2015). [The kanban approach, between agility and leanness: a systematic review|https://doi.org/10.1007/s10664-014-9340-x]. Empirical Software Engineering, 20(6), 1861-1897.",
"aranda09": "Jorge Aranda and Gina Venolia. 2009. [The secret life of bugs: Going past the errors and omissions in software repositories|http://dx.doi.org/10.1109/ICSE.2009.5070530]. In Proceedings of the 31st International Conference on Software Engineering (ICSE '09). IEEE Computer Society, Washington, DC, USA, 298-308.",
"atwood16": "Software Engineering Daily. [The State of Programming with Stack Overflow Co-Founder Jeff Atwood|https://softwareengineeringdaily.com/2016/03/14/state-programming-jeff-atwood/].",
"bacchelli13": "Alberto Bacchelli and Christian Bird. 2013. [Expectations, outcomes, and challenges of modern code review|https://doi.org/10.1109/ICSE.2013.6606617]. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 712-721.",
"baecker88": "R. Baecker. 1988. [Enhancing program readability and comprehensibility with tools for program visualization|https://doi.org/10.1109/ICSE.1988.93716]. In Proceedings of the 10th international conference on Software engineering (ICSE '88). IEEE Computer Society Press, Los Alamitos, CA, USA, 356-366.",
"baltes18": "Baltes, S., & Diehl, S. (2018). [Towards a theory of software development expertise|https://doi.org/10.1145/3236024.3236061]. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (pp. 187-200). ACM.",
"bartram16": "Software Engineering Daily (2016). [Hiring Engineers with Ammon Bartram|https://softwareengineeringdaily.com/2015/12/23/hiring-engineers-with-ammon-bartram/].",
"barua14": "Barua, A., Thomas, S. W., & Hassan, A. E. (2014). [What are developers talking about? an analysis of topics and trends in Stack Overflow|http://link.springer.com/article/10.1007/s10664-012-9231-y]. Empirical Software Engineering, 19(3), 619-654.",
"bass03": "Len Bass, Bonnie E. John. 2003. [Linking usability to software architecture patterns through general scenarios|http://www.sciencedirect.com/science/article/pii/S0164121202000766]. Journal of Systems and Software, Volume 66, Issue 3, Pages 187-197.",
"beck96": "Kent Beck, Ron Crocker, Gerard Meszaros, John Vlissides, James O. Coplien, Lutz Dominick, and Frances Paulisch. 1996. [Industrial experience with design patterns|https://doi.org/10.1109/ICSE.1996.493406]. In Proceedings of the 18th international conference on Software engineering (ICSE '96). IEEE Computer Society, Washington, DC, USA, 103-114.",
"beck99": "Beck, K. (1999). [Embracing change with extreme programming|https://doi.org/10.1007/s10664-014-9340-x]. Computer, 32(10), 70-77.",
"begel08": "Begel, A., & Simon, B. (2008). [Novice software developers, all over again|http://dl.acm.org/citation.cfm?id=1404522]. In Proceedings of the Fourth international Workshop on Computing Education Research (pp. 3-14). ACM.",
"begel10": "Andrew Begel, Yit Phang Khoo, and Thomas Zimmermann (2010). [Codebook: discovering and exploiting relationships in software repositories|http://dx.doi.org/10.1145/1806799.1806821]. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 125-134.",
"begel14": "Begel, A., & Zimmermann, T. (2014). [Analyze this! 145 questions for data scientists in software engineering|https://doi.org/10.1145/2568225.2568233]. In Proceedings of the 36th International Conference on Software Engineering (pp. 12-23).",
"beller15": "Moritz Beller, Georgios Gousios, Annibale Panichella, and Andy Zaidman. 2015. [When, how, and why developers (do not) test in their IDEs|https://doi.org/10.1145/2786805.2786843]. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 179-190.",
"beller18": "Beller, M., Spruit, N., Spinellis, D., & Zaidman, A. (2018). [On the dichotomy of debugging behavior among programmers|https://doi.org/10.1145/3180155.3180175]. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE) (pp. 572-583).",
"bendifallah89": "Salah Bendifallah and Walt Scacchi. 1989. [Work structures and shifts: an empirical analysis of software specification teamwork|http://dx.doi.org/10.1145/74587.74624]. In Proceedings of the 11th international conference on Software engineering (ICSE '89). ACM, New York, NY, USA, 260-270.",
"benjamin19": "Benjamin, R. (2019). [Race after Technology: Abolitionist Tools for the New Jim Code|https://www.ruhabenjamin.com/race-after-technology]. Polity Books.",
"bertram10": "Dane Bertram, Amy Voida, Saul Greenberg, and Robert Walker. 2010. [Communication, collaboration, and bugs: the social nature of issue tracking in small, collocated teams|http://dx.doi.org/10.1145/1718918.1718972]. In Proceedings of the 2010 ACM conference on Computer supported cooperative work (CSCW '10). ACM, New York, NY, USA, 291-300.",
"bettenburg08": "Nicolas Bettenburg, Sascha Just, Adrian Schröter, Cathrin Weiss, Rahul Premraj, and Thomas Zimmermann. 2008. [What makes a good bug report?|http://dx.doi.org/10.1145/1453101.1453146] In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering (SIGSOFT '08/FSE-16). ACM, New York, NY, USA, 308-318.",
"bettenburg13": "Bettenburg, N., & Hassan, A. E. (2013). [Studying the impact of social interactions on software quality|https://doi.org/10.1007/s10664-012-9205-0]. Empirical Software Engineering, 18(2), 375-431.",
"bhattacharya11": "Pamela Bhattacharya and Iulian Neamtiu. 2011. [Assessing programming language impact on development and maintenance: a study on C and C++|https://doi.org/10.1145/1985793.1985817]. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 171-180.",
"binkley13": "Binkley, D., Davis, M., Lawrie, D., Maletic, J. I., Morrell, C., & Sharif, B. (2013). [The impact of identifier style on effort and comprehension|https://link.springer.com/article/10.1007/s10664-012-9201-4]. Empirical Software Engineering, 18(2), 219-276.",
"bird11": "Christian Bird, Nachiappan Nagappan, Brendan Murphy, Harald Gall, and Premkumar Devanbu. 2011. [Don't touch my code! Examining the effects of ownership on software quality|http://dx.doi.org/10.1145/2025113.2025119]. In Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11). ACM, New York, NY, USA, 4-14.",
"boehm76": "Boehm, B. W. (1976). [Software Engineering|http://ieeexplore.ieee.org/document/1674590/]. IEEE Transactions on Computers, 25(12), 1226-1241.",
"boehm88": "Boehm, B. W. (1988). [A spiral model of software development and enhancement|http://ieeexplore.ieee.org/abstract/document/59/]. Computer, 21(5), 61-72.",
"boehm91": "Boehm, B. W. (1991). [Software risk management: principles and practices|http://ieeexplore.ieee.org/abstract/document/62930]. IEEE software, 8(1), 32-41.",
"borozdin17": "[Engineering Management with Mike Borozdin|https://softwareengineeringdaily.com/2017/02/08/engineering-management-with-mike-borozdin/] (2017). Software Engineering Daily.",
"brooks95": "Brooks, F.P. (1995). [The Mythical Man Month|https://books.google.com/books?id=Yq35BY5Fk3gC].",
"callaú13": "Callaú, O., Robbes, R., Tanter, É., & Röthlisberger, D. (2013). [How (and why) developers use the dynamic features of programming languages: the case of Smalltalk|https://doi.org/10.1145/1985441.1985448]. Empirical Software Engineering, 18(6), 1156-1194.",
"casalnuovo15": "Casey Casalnuovo, Bogdan Vasilescu, Premkumar Devanbu, and Vladimir Filkov. 2015. [Developer onboarding in GitHub: the role of prior social links and language experience|https://doi.org/10.1145/2786805.2786854]. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 817-828.",
"casalnuovo15-2": "Casey Casalnuovo, Prem Devanbu, Abilio Oliveira, Vladimir Filkov, and Baishakhi Ray. 2015. [Assert use in GitHub projects|https://doi.org/10.1145/2568225.2568285]. In Proceedings of the 37th International Conference on Software Engineering - Volume 1 (ICSE '15), Vol. 1. IEEE Press, Piscataway, NJ, USA, 755-766.",
"chedeua15": "[React JS with Sebastian Marbage and Christopher Chedeua|https://softwareengineeringdaily.com/2015/07/27/react-js-with-sebastian-markbage-and-christopher-chedeau/], Software Engineering Daily",
"chen09": "Chen, Chien-Tsun, Yu Chin Cheng, Chin-Yun Hsieh, and I-Lang Wu. [Exception handling refactorings: Directed by goals and driven by bug fixing|https://doi.org/10.1016/j.jss.2008.06.035]. Journal of Systems and Software 82, no. 2 (2009): 333-345.",
"chilana11": "Parmit K. Chilana, Amy J. Ko, Jacob O. Wobbrock, Tovi Grossman, and George Fitzmaurice. 2011. [Post-deployment usability: a survey of current practices|http://dx.doi.org/10.1145/1978942.1979270]. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 2243-2246.",
"chilana13": "Chilana, P. K., Ko, A. J., Wobbrock, J. O., & Grossman, T. (2013). [A multi-site field study of crowdsourced contextual help: usage and perspectives of end users and software teams|http://dl.acm.org/citation.cfm?id=2470685]. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 217-226).",
"chong07": "Jan Chong and Tom Hurlbutt. 2007. [The Social Dynamics of Pair Programming|http://dx.doi.org/10.1109/ICSE.2007.87]. In Proceedings of the 29th international conference on Software Engineering (ICSE '07). IEEE Computer Society, Washington, DC, USA, 354-363.",
"cito15": "Jürgen Cito, Philipp Leitner, Thomas Fritz, and Harald C. Gall. 2015. [The making of cloud applications: an empirical study on software development for the cloud|https://doi.org/10.1145/2786805.2786826]. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 393-403.",
"clarke06": "Clarke, L. A., & Rosenblum, D. S. (2006). [A historical perspective on runtime assertion checking in software development|https://doi.org/10.1145/1127878.1127900]. ACM SIGSOFT Software Engineering Notes, 31(3), 25-37.",
"clegg08": "Clegg, S. and Bailey, J.R. (2008). [International Encyclopedia of Organization Studies|https://books.google.com/books?id=Uac5DQAAQBAJ]. Sage Publications.",
"coelho17": "Jailton Coelho and Marco Tulio Valente (2017). [Why modern open source projects fail|https://doi.org/10.1145/3106237.3106246]. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017).",
"conway68": "Conway, M. E. (1968). [How do committees invent?|https://pdfs.semanticscholar.org/cbce/35eedcde3ef152bde75950fbc7ef4c6717b2.pdf]. Datamation, 14(4), 28-31.",
"dagenais10": "Barthélémy Dagenais, Harold Ossher, Rachel K. E. Bellamy, Martin P. Robillard, and Jacqueline P. de Vries. 2010. [Moving into a new software project landscape|http://dx.doi.org/10.1145/1806799.1806842]. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 275-284.",
"demarco85": "Tom DeMarco and Tim Lister. 1985. [Programmer performance and the effects of the workplace|http://dl.acm.org/citation.cfm?id=319651]. In Proceedings of the 8th international conference on Software engineering (ICSE '85). IEEE Computer Society Press, Los Alamitos, CA, USA, 268-272.",
"demarco87": "DeMarco, T. and Lister, T. (1987). [Peopleware: Productive Projects and Teams|https://books.google.com/books?id=TVQUAAAAQBAJ].",
"dibella13": "di Bella, E., Fronza, I., Phaphoom, N., Sillitti, A., Succi, G., & Vlasenko, J. (2013). [Pair Programming and Software Defects--A Large, Industrial Case Study|https://doi.org/10.1109/TSE.2012.68]. IEEE Transactions on Software Engineering, 39(7), 930-953.",
"dingsøyr03": "Torgeir Dingsøyr, Emil Røyrvik. 2003. [An empirical study of an informal knowledge repository in a medium-sized software consulting company|http://dl.acm.org/citation.cfm?id=776827]. In Proceedings of the 25th International Conference on Software Engineering (ICSE '03). IEEE Computer Society, Washington, DC, USA, 84-92.",
"duala12": "Ekwa Duala-Ekoko and Martin P. Robillard. 2012. [Asking and answering questions about unfamiliar APIs: an exploratory study|http://dl.acm.org/citation.cfm?id=2337255]. In Proceedings of the 34th International Conference on Software Engineering (ICSE '12). IEEE Press, Piscataway, NJ, USA, 266-276.",
"dybå02": "Dybå, T. (2002). [Enabling software process improvement: an investigation of the importance of organizational issues|https://doi.org/10.1145/940071.940092]. Empirical Software Engineering, 7(4), 387-390.",
"dybå03": "Tore Dybå. 2003. [Factors of software process improvement success in small and large organizations: an empirical study in the scandinavian context|http://dx.doi.org/10.1145/940071.940092]. In Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (ESEC/FSE-11). ACM, New York, NY, USA, 148-157.",
"ebert15": "Ebert, F., Castor, F., and Serebrenik, A. (2015). [An exploratory study on exception handling bugs in Java programs|https://doi.org/10.1016/j.jss.2015.04.066]. Journal of Systems and Software, 106, 82-101.",
"endrikat14": "Stefan Endrikat, Stefan Hanenberg, Romain Robbes, and Andreas Stefik. 2014. [How do API documentation and static typing affect API usability?|https://doi.org/10.1145/2568225.2568299] In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 632-642.",
"ernst15": "Neil A. Ernst, Stephany Bellomo, Ipek Ozkaya, Robert L. Nord, and Ian Gorton. 2015. [Measure it? Manage it? Ignore it? Software practitioners and technical debt|https://doi.org/10.1145/2786805.2786848]. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 50-60.",
"fairbanks10": "Fairbanks, G. (2010). [Just enough software architecture: a risk-driven approach|https://www.amazon.com/Just-Enough-Software-Architecture-Risk-Driven/dp/0984618104]. Marshall & Brainerd.",
"fleming13": "Fleming, S. D., Scaffidi, C., Piorkowski, D., Burnett, M., Bellamy, R., Lawrance, J., & Kwan, I. (2013). [An information foraging theory perspective on tools for debugging, refactoring, and reuse tasks|https://doi.org/10.1145/2430545.2430551]. ACM Transactions on Software Engineering and Methodology (TOSEM), 22(2), 14.",
"foucault15": "Matthieu Foucault, Marc Palyart, Xavier Blanc, Gail C. Murphy, and Jean-Rémy Falleri. 2015. [Impact of developer turnover on quality in open-source software|https://doi.org/10.1145/2786805.2786870]. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 829-841.",
"fucci16": "Fucci, D., Erdogmus, H., Turhan, B., Oivo, M., & Juristo, N. (2016). [A Dissection of Test-Driven Development: Does It Really Matter to Test-First or to Test-Last?|https://doi.org/10.1109/TSE.2016.2616877]. IEEE Transactions on Software Engineering.",
"garlan95": "Garlan, D., Allen, R., & Ockerbloom, J. (1995). [Architectural mismatch or why it's hard to build systems out of existing parts|https://doi.org/10.1145/225014.225031]. In Proceedings of the 17th international conference on Software engineering (pp. 179-185).",
"gilmore91": "Gilmore, D. (1991). [Models of debugging|https://doi.org/10.1016/0001-6918(91)90009-O]. Acta Psychologica, 78, 151-172.",
"gleick11": "Gleick, James (2011). [The Information: A History, A Theory, A Flood|https://books.google.com/books?id=617JSFW0D2kC]. Pantheon Books.",
"glerum09": "Kirk Glerum, Kinshuman Kinshumann, Steve Greenberg, Gabriel Aul, Vince Orgovan, Greg Nichols, David Grant, Gretchen Loihle, and Galen Hunt. 2009. [Debugging in the (very) large: ten years of implementation and experience|http://dx.doi.org/10.1145/1629575.1629586]. In Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles (SOSP '09). ACM, New York, NY, USA, 103-116.",
"grabner16": "[Performance Monitoring with Andi Grabner|https://softwareengineeringdaily.com/2016/12/27/performance-monitoring-with-andi-grabner/]. Software Engineering Daily.",
"green89": "Green, T. R. (1989). [Cognitive dimensions of notations|https://www.cl.cam.ac.uk/~afb21/CognitiveDimensions/papers/Green1989.pdf]. People and computers V, 443-460.",
"grossman09": "Grossman, T., Fitzmaurice, G., & Attar, R. (2009). [A survey of software learnability: metrics, methodologies and guidelines|https://doi.org/10.1145/1518701.1518803]. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 649-658).",
"grudin17": "Grudin, Jonathan (2017). [From Tool to Partner: The Evolution of Human-Computer Interaction|https://books.google.com/books?id=Wc3hDQAAQBAJ].",
"hanenberg13": "Stefan Hanenberg, Sebastian Kleinschmager, Romain Robbes, Éric Tanter, Andreas Stefik. [An empirical study on the impact of static typing on software maintainability|https://doi.org/10.1007/s10664-013-9289-1]. Empirical Software Engineering. 2013.",
"herbsleb03": "James D. Herbsleb and Audris Mockus. 2003. [Formulation and preliminary test of an empirical theory of coordination in software engineering|http://dx.doi.org/10.1145/940071.940091]. In Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (ESEC/FSE-11). ACM, New York, NY, USA, 138-147.",
"herbsleb16": "James Herbsleb. 2016. [Building a socio-technical theory of coordination: why and how|https://doi.org/10.1145/2950290.2994160]. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 2-10.",
"hoda10": "Rashina Hoda, James Noble, and Stuart Marshall. 2010. [Organizing self-organizing teams|https://doi.org/10.1145/1806799.1806843]. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 285-294.",
"hoda17": "Hoda, R., & Noble, J. (2017). [Becoming agile: a grounded theory of agile transitions in practice|https://doi.org/10.1109/ICSE.2017.21]. In Proceedings of the 39th International Conference on Software Engineering (pp. 141-151). IEEE Press.",
"ivory01": "Ivory M.Y., Hearst, M.A. (2001). [The state of the art in automating usability evaluation of user interfaces|http://doi.acm.org/10.1145/503112.503114]. ACM Computing Surveys, 33(4).",
"jackson01": "Jackson, Michael (2001). [Problem Frames|https://books.google.com/books?id=8fqIP83Q2IAC]. Addison-Wesley.",
"johnson13": "Brittany Johnson, Yoonki Song, Emerson Murphy-Hill, and Robert Bowdidge. 2013. [Why don't software developers use static analysis tools to find bugs?|http://ieeexplore.ieee.org/abstract/document/6606613] In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 672-681.",
"johnson15": "Brittany Johnson, Rahul Pandita, Emerson Murphy-Hill, and Sarah Heckman. 2015. [Bespoke tools: adapted to the concepts developers know|https://doi.org/10.1145/2786805.2803197]. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 878-881.",
"kalliamvakou17": "Kalliamvakou, E., Bird, C., Zimmermann, T., Begel, A., DeLine, R., & German, D. M. (2017). [What makes a great manager of software engineers?|https://ieeexplore.ieee.org/abstract/document/8094304/] IEEE Transactions on Software Engineering.",
"kalliamvakou20": "Kalliamvakou, E., Bird, C., Zimmermann, T., Begel, A., DeLine, R., & German, D. M. [What Makes a Great Manager of Software Engineers?|https://doi.org/10.1109/TSE.2017.2768368]. IEEE Transactions on Software Engineering, 45(1), 87-106.",
"kay96": "Kay, A. C. (1996). [The early history of Smalltalk|http://dl.acm.org/citation.cfm?id=1057828]. History of programming languages---II (pp. 511-598).",
"kenney00": "Kenney, M. (2000). Understanding Silicon Valley: The anatomy of an entrepreneurial region. Stanford University Press.",
"kernighan16": "[Language Design with Brian Kernighan|https://softwareengineeringdaily.com/2016/01/06/language-design-with-brian-kernighan/], Software Engineering Daily.",
"kersten06": "Mik Kersten and Gail C. Murphy. 2006. [Using task context to improve programmer productivity|http://dx.doi.org/10.1145/1181775.1181777]. In Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering (SIGSOFT '06/FSE-14). ACM, New York, NY, USA, 1-11.",
"khadka14": "Ravi Khadka, Belfrit V. Batlajery, Amir M. Saeidi, Slinger Jansen, and Jurriaan Hage. 2014. [How do professionals perceive legacy systems and software modernization?|http://dx.doi.org/10.1145/2568225.2568318]. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 36-47.",
"kim12": "Miryung Kim, Thomas Zimmermann, and Nachiappan Nagappan. 2012. [A field study of refactoring challenges and benefits|http://dx.doi.org/10.1145/2393596.2393655]. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE '12). ACM, New York, NY, USA, Article 50, 11 pages.",
"kim16": "Miryung Kim, Thomas Zimmermann, Robert DeLine, and Andrew Begel. 2016. [The emerging role of data scientists on software development teams|https://doi.org/10.1145/2884781.2884783]. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 96-107.",
"ko04": "Ko, A. J., Myers, B. A., & Aung, H. H. (2004, September). [Six learning barriers in end-user programming systems|http://ieeexplore.ieee.org/abstract/document/1372321/]. In Visual Languages and Human Centric Computing, 2004 IEEE Symposium on (pp. 199-206). IEEE.",
"ko05": "Amy J. Ko, Htet Htet Aung, Brad A. Myers (2005). [Eliciting Design Requirements for Maintenance-Oriented IDEs: A Detailed Study of Corrective and Perfective Maintenance Tasks|http://ieeexplore.ieee.org/abstract/document/1553555/]. International Conference on Software Engineering (ICSE), 126-135.",
"ko07": "Amy J. Ko, Rob DeLine, and Gina Venolia (2007). [Information needs in collocated software development teams|https://doi.org/10.1109/ICSE.2007.45]. In 29th International Conference on Software Engineering, 344-353.",
"ko08": "Amy J. Ko and Brad A. Myers. 2008. [Debugging reinvented: asking and answering why and why not questions about program behavior|http://dx.doi.org/10.1145/1368088.1368130]. In Proceedings of the 30th international conference on Software engineering (ICSE '08). ACM, New York, NY, USA, 301-310.",
"ko09": "Amy J. Ko and Brad A. Myers (2009). [Finding causes of program output with the Java Whyline|https://doi.org/10.1145/1518701.1518942]. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1569-1578).",
"ko17": "Ko, Amy J. (2017). [A Three-Year Participant Observation of Software Startup Software Evolution|https://faculty.washington.edu/ajko/papers/Ko2017AnswerDashReflection.pdf]. International Conference on Software Engineering, Software Engineering in Practice, 3-12.",
"kocaguneli13": "Ekrem Kocaguneli, Thomas Zimmermann, Christian Bird, Nachiappan Nagappan, and Tim Menzies. 2013. [Distributed development considered harmful?|https://doi.org/10.1109/ICSE.2013.6606637]. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 882-890.",
"koide05": "Amy J. Ko, Htet Aung, and Brad A. Myers. 2005. [Eliciting design requirements for maintenance-oriented IDEs: a detailed study of corrective and perfective maintenance tasks|http://ieeexplore.ieee.org/abstract/document/1553555/]. In Proceedings of the 27th international conference on Software engineering (ICSE '05). ACM, New York, NY, USA, 126-135.",
"kononenko16": "Oleksii Kononenko, Olga Baysal, and Michael W. Godfrey. 2016. [Code review quality: how developers see it|https://doi.org/10.1145/2884781.2884840]. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 1028-1038.",
"lamsweerde08": "Axel van Lamsweerde. (2008). [Requirements engineering: from craft to discipline|http://dx.doi.org/10.1145/1453101.1453133]. In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering (SIGSOFT '08/FSE-16). ACM, New York, NY, USA, 238-249.",
"latoza06": "Thomas D. LaToza, Gina Venolia, and Robert DeLine. 2006. [Maintaining mental models: a study of developer work habits|http://dx.doi.org/10.1145/1134285.1134355]. In Proceedings of the 28th international conference on Software engineering (ICSE '06). ACM, New York, NY, USA, 492-501.",
"latoza07": "Thomas D. LaToza, David Garlan, James D. Herbsleb, and Brad A. Myers. 2007. [Program comprehension as fact finding|http://dx.doi.org/10.1145/1287624.1287675]. In Proceedings of the the 6th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering (ESEC-FSE '07). ACM, New York, NY, USA, 361-370.",
"latoza10": "Thomas D. LaToza and Brad A. Myers. 2010. [Developers ask reachability questions|http://dx.doi.org/10.1145/1806799.1806829]. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 185-194.",
"lavallee15": "Mathieu Lavallee and Pierre N. Robillard. 2015. [Why good developers write bad code: an observational case study of the impacts of organizational factors on software quality|http://dl.acm.org/citation.cfm?id=2818754.2818837]. In Proceedings of the 37th International Conference on Software Engineering - Volume 1 (ICSE '15), Vol. 1. IEEE Press, Piscataway, NJ, USA, 677-687.",
"lawrie06": "Lawrie, D., Morrell, C., Feild, H., & Binkley, D. (2006). [What's in a name? A study of identifiers|https://doi.org/10.1109/ICPC.2006.51]. IEEE International Conference on Program Comprehension, 3-12.",
"lee03": "Lee, G. K., & Cole, R. E. (2003). [From a firm-based to a community-based model of knowledge creation: The case of the Linux kernel development|http://pubsonline.informs.org/doi/abs/10.1287/orsc.14.6.633.24866]. Organization science, 14(6), 633-649.",
"li15": "Paul Luo Li, Amy J. Ko, and Jiamin Zhu. 2015. [What makes a great software engineer?|http://dl.acm.org/citation.cfm?id=2818839]. In Proceedings of the 37th International Conference on Software Engineering - Volume 1 (ICSE '15), Vol. 1. IEEE Press, Piscataway, NJ, USA, 700-710.",
"li17": "Li, P. L., Ko, A. J., & Begel, A. (2017). [Cross-disciplinary perspectives on collaborations with software engineers|http://dl.acm.org/citation.cfm?id=3100319]. In Proceedings of the 10th International Workshop on Cooperative and Human Aspects of Software Engineering (pp. 2-8).",
"maalej14": "Walid Maalej, Rebecca Tiarks, Tobias Roehm, and Rainer Koschke. 2014. [On the Comprehension of program Comprehension|http://dx.doi.org/10.1145/2622669]. ACM Transactions on Software Engineering and Methodology. 23, 4, Article 31 (September 2014), 37 pages.",
"mader15": "Mäder, P., & Egyed, A. (2015). [Do developers benefit from requirements traceability when evolving and maintaining a software system?|https://doi.org/10.1007/s10664-014-9314-z]. Empirical Software Engineering, 20(2), 413-441.",
"mamykina11": "Mamykina, L., Manoim, B., Mittal, M., Hripcsak, G., & Hartmann, B. (2011). [Design lessons from the fastest Q&A site in the west|https://doi.org/10.1145/1978942.1979366]. In Proceedings of the SIGCHI conference on Human factors in computing systems, 2857-2866.",
"mark08": "Mark, G., Gudith, D., & Klocke, U. (2008). [The cost of interrupted work: more speed and stress|http://dl.acm.org/citation.cfm?id=1357072]. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (pp. 107-110).",
"maxion00": "Maxion, Roy A., and Robert T. Olszewski. [Eliminating exception handling errors with dependability cases: a comparative, empirical study|https://doi.org/10.1109/32.877848]. IEEE Transactions on Software Engineering 26, no. 9 (2000): 888-906.",
"may19": "May, A., Wachs, J., & Hannák, A. (2019). [Gender differences in participation and reward on Stack Overflow|https://link.springer.com/article/10.1007/s10664-019-09685-x]. Empirical Software Engineering, 1-23.",
"mccarthy78": "McCarthy, J. (1978). [History of LISP|http://dl.acm.org/citation.cfm?id=1198360]. In History of programming languages I (pp. 173-185).",
"mcilwain19": "McIlwain, C. D. (2019). [Black Software: The Internet and Racial Justice, from the AfroNet to Black Lives Matter|https://global.oup.com/academic/product/black-software-9780190863845?cc=us&lang=en&]. Oxford University Press.",
"meneely11": "Andrew Meneely, Pete Rotella, and Laurie Williams. 2011. [Does adding manpower also affect quality? An empirical, longitudinal analysis|http://dx.doi.org/10.1145/2025113.2025128]. In Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11). ACM, New York, NY, USA, 81-90.",
"menzies13": "Menzies, T., & Zimmermann, T. (2013). [Software analytics: so what?|https://doi.ieeecomputersociety.org/10.1109/MS.2013.86] IEEE Software, 30(4), 31-37.",
"metcalf02": "Metcalf, M. (2002). [History of Fortran|http://dl.acm.org/citation.cfm?id=602379]. In ACM SIGPLAN Fortran Forum (Vol. 21, No. 3, pp. 19-20).",
"meyer17": "Meyer, A. N., Barton, L. E., Murphy, G. C., Zimmermann, T., & Fritz, T. (2017). [The work life of developers: Activities, switches and perceived productivity|https://doi.org/10.1109/TSE.2017.2656886]. IEEE Transactions on Software Engineering, 43(12), 1178-1193.",
"milewski07": "Milewski, A. E. (2007). [Global and task effects in information-seeking among software engineers|https://link.springer.com/article/10.1007/s10664-007-9036-6]. Empirical Software Engineering, 12(3), 311-326.",
"mockus02": "Audris Mockus and James D. Herbsleb. 2002. [Expertise browser: a quantitative approach to identifying expertise|http://dx.doi.org/10.1145/581339.581401]. In Proceedings of the 24th International Conference on Software Engineering (ICSE '02). ACM, New York, NY, USA, 503-512.",
"mockus10": "Audris Mockus. 2010. [Organizational volatility and its effects on software defects|http://doi.acm.org/10.1145/1882291.1882311]. In Proceedings of the eighteenth ACM SIGSOFT international symposium on Foundations of software engineering (FSE '10). ACM, New York, NY, USA, 117-126.",
"mohanani14": "Rahul Mohanani, Paul Ralph, and Ben Shreeve. 2014. [Requirements fixation|http://dx.doi.org/10.1145/2568225.2568235]. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 895-906.",
"murphy14": "Emerson Murphy-Hill, Thomas Zimmermann, and Nachiappan Nagappan. 2014. [Cowboys, ankle sprains, and keepers of quality: how is video game development different from software development?|http://dx.doi.org/10.1145/2568225.2568226]. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 1-11.",
"murphyhill09": "Emerson Murphy-Hill, Chris Parnin, and Andrew P. Black. 2009. [How we refactor, and how we know it|http://dx.doi.org/10.1109/ICSE.2009.5070529]. In Proceedings of the 31st International Conference on Software Engineering (ICSE '09). IEEE Computer Society, Washington, DC, USA, 287-297.",
"murphyhill13": "Emerson Murphy-Hill, Thomas Zimmermann, Christian Bird, and Nachiappan Nagappan. 2013. [The design of bug fixes|http://dl.acm.org/citation.cfm?id=2486833]. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 332-341.",
"müller03": "Matthias M. Müller and Frank Padberg. 2003. [On the economic evaluation of XP projects|http://dx.doi.org/10.1145/940071.940094]. In Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (ESEC/FSE-11). ACM, New York, NY, USA, 168-177.",
"ng06": "T. H. Ng, S. C. Cheung, W. K. Chan, and Y. T. Yu. 2006. [Work experience versus refactoring to design patterns: a controlled experiment|http://dx.doi.org/10.1145/1181775.1181778]. In Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering (SIGSOFT '06/FSE-14). ACM, New York, NY, USA, 12-22.",
"norris17": "[Tech Leadership with Jeff Norris|https://softwareengineeringdaily.com/2016/09/22/tech-leadership-with-jeff-norris/] (2017). Software Engineering Daily.",
"northrup16": "[Reflections of an Old Programmer|https://softwareengineeringdaily.com/2016/11/09/reflections-of-an-old-programmer-with-ben-northrup/] (2016). Software Engineering Daily.",
"osterwalder15": "A. Osterwalder, Y. Pigneur, G. Bernarda, & A. Smith (2015). [Value proposition design: how to create products and services customers want|https://books.google.com/books?id=jgu5BAAAQBAJ]. John Wiley & Sons.",
"overney20": "Overney, C., Meinicke, J., Kästner, C., & Vasilescu, B. (2020). [How to Not Get Rich: An Empirical Study of Donations in Open Source|https://cmustrudel.github.io/papers/overney20donations.pdf]. International Conference on Software Engineering.",
"parnas86": "Parnas, D. L., & Clements, P. C. (1986). [A rational design process: How and why to fake it|https://doi.org/10.1109/TSE.1986.6312940]. IEEE Transactions on Software Engineering, (2), 251-257.",
"perlow99": "Perlow, L. A. (1999). [The time famine: Toward a sociology of work time|http://journals.sagepub.com/doi/abs/10.2307/2667031]. Administrative science quarterly, 44(1), 57-81.",
"petre13": "Marian Petre. 2013. [UML in practice|https://doi.org/10.1145/2568225.2568285]. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 722-731.",
"petre16": "Petre, M., van der Hoek, A., & Quach, Y. (2016). [Software Design Decoded: 66 Ways Experts Think|https://www.google.com/books/edition/_/EVE4DQAAQBAJ]. MIT Press.",
"pettersen16": "[Git Workflows with Tim Pettersen|https://softwareengineeringdaily.com/2016/04/06/git-workflows-tim-pettersen/] (2016). Software Engineering Daily.",
"pham14": "Raphael Pham, Stephan Kiesling, Olga Liskin, Leif Singer, and Kurt Schneider. 2014. [Enablers, inhibitors, and perceptions of testing in novice software teams|http://dx.doi.org/10.1145/2635868.2635925]. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 30-40.",
"pikkarainen98": "Pikkarainen, M., Haikara, J., Salo, O., Abrahamsson, P., & Still, J. (2008). [The impact of agile practices on communication in software development|http://dl.acm.org/citation.cfm?id=1380667]. Empirical Software Engineering, 13(3), 303-337.",
"prince17": "[Product Management with Suzie Prince|https://softwareengineeringdaily.com/2017/01/18/product-management-with-suzie-prince/], Software Engineering Daily.",
"procaccino05": "Procaccino, J. D., Verner, J. M., Shelfer, K. M., & Gefen, D. (2005). [What do software practitioners really think about project success: an exploratory study|http://www.sciencedirect.com/science/article/pii/S0164121204002614]. Journal of Systems and Software, 78(2), 194-203.",
"qiu13": "Dong Qiu, Bixin Li, and Zhendong Su. 2013. [An empirical analysis of the co-evolution of schema and code in database applications|http://dx.doi.org/10.1145/2491411.2491431]. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM, New York, NY, USA, 125-135.",
"qureshi16": "[Debugging Stories with Haseeb Qureshi|https://softwareengineeringdaily.com/2016/11/19/debugging-stories-with-haseeb-qureshi/]. Software Engineering Daily.",
"ralph14": "Paul Ralph and Paul Kelly. 2014. [The dimensions of software engineering success|http://dx.doi.org/10.1145/2568225.2568261]. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 24-35.",
"ramasubbu11": "Narayan Ramasubbu, Marcelo Cataldo, Rajesh Krishna Balan, and James D. Herbsleb. 2011. [Configuring global software teams: a multi-company analysis of project productivity, quality, and profits|https://doi.org/10.1145/1985793.1985830]. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 261-270.",
"ray14": "Baishakhi Ray, Daryl Posnett, Vladimir Filkov, and Premkumar Devanbu. 2014. [A large scale study of programming languages and code quality in GitHub|http://dx.doi.org/10.1145/2635868.2635922]. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 155-165.",
"rigby11": "Peter C. Rigby and Margaret-Anne Storey. 2011. [Understanding broadcast based peer review on open source software projects|https://doi.org/10.1145/1985793.1985867]. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 541-550.",
"rigby13": "Peter C. Rigby and Christian Bird. 2013. [Convergent contemporary software peer review practices|http://dx.doi.org/10.1145/2491411.2491444]. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM, New York, NY, USA, 202-212.",
"rigby16": "Peter C. Rigby, Yue Cai Zhu, Samuel M. Donadelli, and Audris Mockus. 2016. [Quantifying and mitigating turnover-induced knowledge loss: case studies of chrome and a project at Avaya|https://doi.org/10.1145/2884781.2884851]. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 1006-1016.",
"roehm12": "Tobias Roehm, Rebecca Tiarks, Rainer Koschke, and Walid Maalej. 2012. [How do professional developers comprehend software?|http://dl.acm.org/citation.cfm?id=2337254] In Proceedings of the 34th International Conference on Software Engineering (ICSE '12). IEEE Press, Piscataway, NJ, USA, 255-265.",
"rubin16": "Julia Rubin and Martin Rinard. 2016. [The challenges of staying together while moving fast: an exploratory study|https://doi.org/10.1145/2884781.2884871]. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 982-993.",
"salvaneschi14": "Guido Salvaneschi, Sven Amann, Sebastian Proksch, and Mira Mezini. 2014. https://doi.org/10.1145/2635868.2635895 An empirical study on program comprehension with reactive programming. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 564-575.",
"santos16": "Ronnie E. S. Santos, Fabio Q. B. da Silva, Cleyton V. C. de Magalhães, and Cleviton V. F. Monteiro. 2016. [Building a theory of job rotation in software engineering from an instrumental case study|https://doi.org/10.1145/2884781.2884837]. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 971-981.",
"schiller14": "Schiller, T. W., Donohue, K., Coward, F., & Ernst, M. D. (2014). [Case studies and tools for contract specifications|https://doi.org/10.1145/2568225.2568285]. In Proceedings of the 36th International Conference on Software Engineering (pp. 596-607).",
"seaman97": "Carolyn B. Seaman and Victor R. Basili. 1997. [An empirical study of communication in code inspections|http://dx.doi.org/10.1145/253228.253248]. In Proceedings of the 19th international conference on Software engineering (ICSE '97). ACM, New York, NY, USA, 96-106.",
"sedano17": "Sedano, T., Ralph, P., & P&eacute;raire, C. (2017). [Software development waste|http://dl.acm.org/citation.cfm?id=3097385]. In Proceedings of the 39th International Conference on Software Engineering (pp. 130-140). IEEE Press.",
"sfetsos09": "Sfetsos, P., Stamelos, I., Angelis, L., & Deligiannis, I. (2009). [An experimental investigation of personality types impact on pair effectiveness in pair programming|https://link.springer.com/article/10.1007/s10664-008-9093-5]. Empirical Software Engineering, 14(2), 187.",
"sharp04": "Sharp, H., & Robinson, H. (2004). [An ethnographic study of XP practice|https://doi.org/10.1023/B:EMSE.0000039884.79385.54]. Empirical Software Engineering, 9(4), 353-375.",
"shetterly17": "Shetterly, M. L. (2017). Hidden figures. HarperCollins Nordic.",
"shrestha20": "Shrestha, N., Botta, C., Barik, T., & Parnin, C. (2020). [Here We Go Again: Why Is It Difficult for Developers to Learn Another Programming Language?|http://nischalshrestha.me/docs/cross_language_interference.pdf]. International Conference on Software Engineering.",
"sillito06": "Jonathan Sillito, Gail C. Murphy, and Kris De Volder. 2006. http://dx.doi.org/10.1145/1181775.1181779 Questions programmers ask during software evolution tasks. In Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering (SIGSOFT '06/FSE-14). ACM, New York, NY, USA, 23-34.",
"silva16": "Danilo Silva, Nikolaos Tsantalis, and Marco Tulio Valente. 2016. [Why we refactor? Confessions of GitHub contributors|https://doi.org/10.1145/2950290.2950305]. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 858-870.",
"singer14": "Leif Singer, Fernando Figueira Filho, and Margaret-Anne Storey. 2014. [Software engineering at the speed of light: how developers stay current using Twitter|http://dx.doi.org/10.1145/2568225.2568305]. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 211-221.",
"smite10": "Smite, D., Wohlin, C., Gorschek, T., & Feldt, R. (2010). [Empirical evidence in global software engineering: a systematic review|https://doi.org/10.1007/s10664-009-9123-y]. Empirical software engineering, 15(1), 91-118.",
"somers17": "Somers, James (2017). [The Coming Software Apocalypse|https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/]. The Atlantic Monthly.",
"sommerville97": "Sommerville, I., & Sawyer, P. (1997). [Requirements engineering: a good practice guide|https://books.google.com/books?id=5NnP-VODEc8C]. John Wiley & Sons, Inc.",
"stefik13": "Andreas Stefik and Susanna Siebert. 2013. https://doi.org/10.1145/2534973 An Empirical Investigation into Programming Language Syntax. ACM Transactions on Computing Education 13, 4, Article 19 (November 2013), 40 pages.",
"stol14": "Klaas-Jan Stol and Brian Fitzgerald. 2014. [Two's company, three's a crowd: a case study of crowdsourcing software development|http://dx.doi.org/10.1145/2568225.2568249]. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 187-198.",
"stroustrup96": "Stroustrup, B. (1996). [A history of C++: 1979--1991|http://dl.acm.org/citation.cfm?id=1057836]. In History of programming languages---II (pp. 699-769).",
"stylos08": "Jeffrey Stylos and Brad A. Myers. 2008. [The implications of method placement on API learnability|http://dx.doi.org/10.1145/1453101.1453117]. In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering (SIGSOFT '08/FSE-16). ACM, New York, NY, USA, 105-112.",
"syedabdullah06": "Syed-Abdullah, S., Holcombe, M., & Gheorge, M. (2006). The impact of an agile methodology on the well being of development teams|https://doi.org/10.1007/s10664-009-9123-y]. Empirical Software Engineering, 11(1), 143-167.",
"tao12": "Yida Tao, Yingnong Dang, Tao Xie, Dongmei Zhang, and Sunghun Kim. 2012. http://dx.doi.org/10.1145/2393596.2393656 How do software engineers understand code changes? An exploratory study in industry. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE '12). ACM, New York, NY, USA, , Article 51 , 11 pages.",
"thongtanunam16": "Thongtanunam, P., McIntosh, S., Hassan, A. E., & Iida, H. (2016). [Review participation in modern code review: An empirical study of the Android, Qt, and OpenStack projects|https://doi.org/10.1007/s10664-016-9452-6]. Empirical Software Engineering.",
"treude09": "Christoph Treude and Margaret-Anne Storey. 2009. [How tagging helps bridge the gap between social and technical aspects in software development|http://dx.doi.org/10.1109/ICSE.2009.5070504]. In Proceedings of the 31st International Conference on Software Engineering (ICSE '09). IEEE Computer Society, Washington, DC, USA, 12-22.",
"treude10": "Christoph Treude and Margaret-Anne Storey. 2010. [Awareness 2.0: staying aware of projects, developers and tasks using dashboards and feeds|http://dx.doi.org/10.1145/1806799.1806854]. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 365-374.",
"treude11": "Christoph Treude and Margaret-Anne Storey. 2011. [Effective communication of software development knowledge through community portals|http://dx.doi.org/10.1145/2025113.2025129]. In Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11). ACM, New York, NY, USA, 91-101.",
"turnbull16": "[The Art of Monitoring with James Turnbull|https://softwareengineeringdaily.com/2016/07/28/2739/]. Software Engineering Daily.",
"uemura84": "Keiji Uemura and Miki Ohori. 1984. [A cooperative approach to software development by application engineers and software engineers|http://dl.acm.org/citation.cfm?id=801955]. In Proceedings of the 7th international conference on Software engineering (ICSE '84). IEEE Press, Piscataway, NJ, USA, 86-96.",
"vonmayrhauser94": "A. von Mayrhauser and A. M. Vans. 1994. [Comprehension processes during large scale maintenance|http://dl.acm.org/citation.cfm?id=257741]. In Proceedings of the 16th international conference on Software engineering (ICSE '94). IEEE Computer Society Press, Los Alamitos, CA, USA, 39-48.",
"vosburgh84": "J. Vosburgh, B. Curtis, R. Wolverton, B. Albert, H. Malec, S. Hoben, and Y. Liu. 1984. [Productivity factors and programming environments|http://dl.acm.org/citation.cfm?id=801963]. In Proceedings of the 7th international conference on Software engineering (ICSE '84). IEEE Press, Piscataway, NJ, USA, 143-152.",
"wagstrom14": "Patrick Wagstrom and Subhajit Datta. 2014. [Does latitude hurt while longitude kills? Geographical and temporal separation in a large scale software development project|http://dx.doi.org/10.1145/2568225.2568279]. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 199-210.",
"walker12": "Robert J. Walker, Shreya Rawal, and Jonathan Sillito. 2012. [Do crosscutting concerns cause modularity problems?|http://dx.doi.org/10.1145/2393596.2393654]. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE '12). ACM, New York, NY, USA, , Article 49 , 11 pages.",
"wang16": "Software Engineering Daily. [Female Pursuit of Computer Science with Jennifer Wang|https://softwareengineeringdaily.com/2016/06/13/female-pursuit-computer-science-jennifer-wang/].",
"washington20": "Alicia Nicki Washington. 2020. [When Twice as Good Isn't Enough: The Case for Cultural Competence in Computing|https://doi.org/10.1145/3328778.3366792]. Proceedings of the 51st ACM Technical Symposium on Computer Science Education. 2020.",
"weinberg82": "Gerald M. Weinberg. 1982. [Over-structured management of software engineering|https://dl.acm.org/doi/abs/10.5555/800254.807743]. In Proceedings of the 6th international conference on Software engineering (ICSE '82). IEEE Computer Society Press, Los Alamitos, CA, USA, 2-8.",
"weiser81": "Mark Weiser. 1981. http://dl.acm.org/citation.cfm?id=802557 Program slicing. In Proceedings of the 5th international conference on Software engineering (ICSE '81). IEEE Press, Piscataway, NJ, USA, 439-449.",
"wobbrock11": "Wobbrock, J. O., Kane, S. K., Gajos, K. Z., Harada, S., & Froehlich, J. (2011). [Ability-based design: Concept, principles and examples|https://doi.org/10.1145/1952383.1952384]. ACM Transactions on Accessible Computing (TACCESS), 3(3), 9.",
"woodcock09": "Jim Woodcock, Peter Gorm Larsen, Juan Bicarregui, and John Fitzgerald. 2009. [Formal methods: Practice and experience|http://dx.doi.org/10.1145/1592434.1592436]. ACM Computing Surveys 41, 4, Article 19 (October 2009), 36 pages.",
"woodfield81": "S. N. Woodfield, H. E. Dunsmore, and V. Y. Shen. 1981. http://dl.acm.org/citation.cfm?id=802534 The effect of modularization and comments on program comprehension. In Proceedings of the 5th international conference on Software engineering (ICSE '81). IEEE Press, Piscataway, NJ, USA, 215-223.",
"xia17": "Xia, X., Bao, L., Lo, D., Kochhar, P. S., Hassan, A. E., & Xing, Z. (2017). [What do developers search for on the web?|https://link.springer.com/article/10.1007/s10664-017-9514-4]. Empirical Software Engineering, 22(6), 3149-3185.",
"ye03": "Yunwen Ye and Kouichi Kishida (2003). [Toward an understanding of the motivation Open Source Software developers|http://dl.acm.org/citation.cfm?id=776867]]. In Proceedings of the 25th International Conference on Software Engineering, 419-429.",
"yin11": "Zuoning Yin, Ding Yuan, Yuanyuan Zhou, Shankar Pasupathy, and Lakshmi Bairavasundaram. 2011. [How do fixes become bugs?|http://dx.doi.org/10.1145/2025113.2025121] In Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11). ACM, New York, NY, USA, 26-36.",
"zeller02": "Andreas Zeller. 2002. [Isolating cause-effect chains from computer programs|http://dx.doi.org/10.1145/587051.587053]. In Proceedings of the 10th ACM SIGSOFT symposium on Foundations of software engineering (SIGSOFT '02/FSE-10). ACM, New York, NY, USA, 1-10.",
"zeller09": "Zeller, A. (2009). [Why programs fail: a guide to systematic debugging|https://www.google.com/books/edition/_/_63Bm4LAdDIC]. Elsevier.",
"zhou11": "Minghui Zhou and Audris Mockus. 2011. [Does the initial environment impact the future of developers?|https://doi.org/10.1145/1985793.1985831] In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 271-280."
}
}

27 chapters/architecture.md Normal file
@@ -0,0 +1,27 @@
Once you have a sense of what your design must do (in the form of requirements or other less formal specifications), the next big problem is one of organization. How will you order all of the different data, algorithms, and control implied by your requirements? With a small program of a few hundred lines, you can get away without much organization, but as programs scale, they quickly become impossible to manage alone, let alone with multiple developers. Much of this challenge occurs because requirements _change_, and every time they do, code has to change to accommodate. The more code there is and the more entangled it is, the harder it is to change and the more likely you are to break things.
This is where *architecture* comes in. Architecture is a way of organizing code, just like building architecture is a way of organizing space. The idea of software architecture has at its foundation a principle of *information hiding*: the less a part of a program knows about other parts of a program, the easier it is to change. The most popular information hiding strategy is *encapsulation*: this is the idea of designing self-contained abstractions with well-defined interfaces that separate different concerns in a program. Programming languages offer encapsulation support through things like *functions* and *classes*, which encapsulate data and functionality together. Another programming language encapsulation method is *scoping*, which hides variables and other names from parts of a program outside a scope. All of these strategies attempt to encourage developers to maximize information hiding and separation of concerns. If you get your encapsulation right, you should be able to easily make changes to a program's behavior without having to change _everything_ about its implementation.
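To make encapsulation concrete, here is a minimal sketch in Python (the `Counter` component and its names are hypothetical, not from any particular system): callers depend only on the public interface, so the internal representation can change without breaking them.

```python
# A minimal sketch of encapsulation: a hypothetical Counter component.
# Callers use only increment() and value(); the internal _count variable
# is hidden by convention and free to change.
class Counter:
    def __init__(self):
        self._count = 0  # internal state, not part of the interface

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

# Usage: the caller never touches _count directly.
c = Counter()
c.increment()
c.increment()
print(c.value())  # prints 2
```

If `_count` were later replaced by some other representation, code using `Counter` would not need to change at all; that is the payoff of information hiding.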
When encapsulation strategies fail, one can end up with what some affectionately call a "ball of mud" architecture or "spaghetti code". Ball of mud architectures have no apparent organization, which makes it difficult to comprehend how parts of their implementation interact. A more precise concept that can help explain this disorder is *cross-cutting concerns*, which are things like features and functionality that span multiple different components of a system, or even an entire system. There is some evidence that cross-cutting concerns can lead to difficulties in program comprehension and long-term design degradation <walker12>, all of which reduce productivity and increase the risk of defects. As long-lived systems get harder to change, they can take on _technical debt_, which is the degree to which an implementation is out of sync with a team's understanding of what a product is intended to be. Many developers view such debt as emerging primarily from poor architectural decisions <ernst15>. Over time, this debt can further result in organizational challenges <khadka14>, making change even more difficult.
The preventative solution to these problems is to try to design the architecture up front, mitigating the various risks that come from cross-cutting concerns (defects, low modifiability, etc.) <fairbanks10>. A popular method in the 1990's was the [Unified Modeling Language|https://en.wikipedia.org/wiki/Unified_Modeling_Language] (UML), which was a series of notations for expressing the architectural design of a system before implementing it. Recent studies show that UML is generally not used in practice, and far from universally <petre13>. While these formal representations have generally not been adopted, informal, natural language architectural specifications are still widely used. For example, [Google engineers write design specifications|https://www.industrialempathy.com/posts/design-docs-at-google/] to sort through ambiguities, consider alternatives, and clarify the volume of work required. A study of developers' perceptions of the value of documentation also reinforced that many forms of documentation, including code comments, style guides, requirements specifications, installation guides, and API references, are viewed as critical, and are only viewed as less valuable because teams do not adequately maintain them <aghajani20>.
More recently, researchers and developers have investigated ideas of *architectural styles*, which are patterns of interactions and information exchange between encapsulated components. Some common architectural styles include:
* *Client/server*, in which data is transacted in response to requests. This is the basis of the Internet and cloud computing <cito15>.
* *Pipe and filter*, in which data is passed from component to component, and transformed and filtered along the way. Command lines, compilers, and machine learned programs are examples of pipe and filter architectures.
* *Model-view-controller (MVC)*, in which data is separated from views of the data and from manipulations of data. Nearly all user interface toolkits use MVC, including popular modern frameworks such as React.
* *Peer to peer (P2P)*, in which components transact data through a distributed standard interface. Examples include Bitcoin, Spotify, and Gnutella.
* *Event-driven*, in which some components "broadcast" events and others "subscribe" to notifications of these events. Examples include most model-view-controller-based user interface frameworks, which have models broadcast change events to views, so they may update themselves to render new model state.
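To make one of these styles concrete, here is a minimal pipe and filter sketch in Python (the filters are hypothetical): data flows through a sequence of filters, each of which knows nothing about the others.

```python
# A minimal pipe and filter sketch: each filter takes data in and passes
# transformed data out; the pipeline just wires them together in order.
def lowercase(words):
    return [w.lower() for w in words]

def deduplicate(words):
    return list(dict.fromkeys(words))  # removes duplicates, preserves order

def pipeline(data, filters):
    for f in filters:
        data = f(data)
    return data

result = pipeline(["Read", "the", "THE", "code"], [lowercase, deduplicate])
print(result)  # prints ['read', 'the', 'code']
```

Because every filter shares the same interface, filters can be added, removed, or reordered without touching the others, which is the same property that makes command lines and compilers easy to extend.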
Architectural styles come in all shapes and sizes. Some are smaller design patterns of information sharing <beck96>, whereas others are ubiquitous but specialized patterns such as the architectures required to support undo and cancel in user interfaces <bass03>.
One fundamental unit of which an architecture is composed is a *component*. This is basically a word that refers to any abstraction&mdash;any code, really&mdash;that attempts to _encapsulate_ some well-defined functionality or behavior separate from other functionality and behavior. For example, consider the Java class _Math_: it encapsulates a wide range of related mathematical functions. This class has an interface that decides how it can communicate with other components (sending arguments to a math function and getting a return value). Components can be more than classes though: they might be a data structure, a set of functions, a library, an API, or even something like a web service. All of these are abstractions that encapsulate interrelated computation and state for some well-defined purpose.
The second fundamental unit of architecture is the *connector*. Connectors are code that transmit information _between_ components. They're brokers that connect components, but do not necessarily have meaningful behaviors or states of their own. Connectors can be things like function calls, web service API calls, events, requests, and so on. None of these mechanisms store state or functionality themselves; instead, they are the things that tie components' functionality and state together.
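As a sketch of a connector, consider a hypothetical event bus in Python (all names invented for illustration): it routes notifications between components but holds no meaningful domain behavior or state of its own.

```python
# A minimal sketch of a connector: an event bus that ties components
# together without containing any domain logic itself.
class EventBus:
    def __init__(self):
        self._subscribers = {}  # event name -> list of handler functions

    def subscribe(self, event, handler):
        self._subscribers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        for handler in self._subscribers.get(event, []):
            handler(payload)

# Two components communicate only through the bus, never directly.
bus = EventBus()
received = []
bus.subscribe("saved", received.append)
bus.publish("saved", "document.txt")
print(received)  # prints ['document.txt']
```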
Even with carefully selected architectures, systems can still be difficult to put together, leading to *architectural mismatch* <garlan95>. When mismatch occurs, connecting two styles can require dramatic amounts of code, imposing significant risk of defects and cost of maintenance. One common example of mismatch occurs with the ubiquitous use of database schemas with client/server web applications. A single change in a database schema can often result in dramatic changes in an application, as every line of code that uses that part of the schema, either directly or indirectly, must be updated <qiu13>. This kind of mismatch occurs because the component that renders data (the user interface) is highly "coupled" with the component that manages data (the database): the user interface needs to know _a lot_ about the data, its meaning, and its structure in order to render it meaningfully.
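One way to reduce this kind of coupling is to put a small accessor between the user interface and the schema. Here is a hedged Python sketch (all names hypothetical): if the schema changes, only the accessor changes, not every rendering call site.

```python
# A minimal sketch of decoupling a user interface from a database schema.
# The UI calls the accessor instead of reading column names directly.
def get_display_name(row):
    # If the schema later split "name" into "first_name"/"last_name",
    # only this accessor would change.
    return row["name"]

def render_user(row):
    # UI code depends on the accessor, not on the schema's column names.
    return f"User: {get_display_name(row)}"

print(render_user({"name": "Amy"}))  # prints User: Amy
```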
The most common approach to dealing with both architectural mismatch and the changing of requirements over time is *refactoring*, which means changing the _architecture_ of an implementation without changing its behavior. Refactoring is something most developers do as part of changing a system <murphyhill09,silva16>. Refactoring code to eliminate mismatch and technical debt can simplify change in the future, saving time <ng06> and preventing future defects <kim12>. However, because refactoring remains challenging, the difficulty of changing an architecture is often used as a rationale for rejecting demands for change from users. For example, Google does not allow one to change their Gmail address, which greatly harms people who have changed their name (such as this author when she came out as a trans woman), forcing them to either live with an address that includes their old name, or abandon their Google account, with no ability to transfer documents or settings. The rationale for this has nothing to do with policy and everything to do with the fact that the original architecture of Gmail treats the email address as a stable, unique identifier for an account. Changing this basic assumption throughout Gmail's implementation would be an immense refactoring task.
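As a tiny illustration of refactoring, here is a hypothetical Python sketch: extracting duplicated tax logic into a helper changes the structure of the code without changing its behavior.

```python
# Before refactoring: the tax rule is duplicated inline at each use.
def invoice_total_before(prices):
    return sum(p + p * 0.1 for p in prices)

# After refactoring: the rule lives in one well-named helper.
def with_tax(price, rate=0.1):
    return price + price * rate

def invoice_total_after(prices):
    return sum(with_tax(p) for p in prices)

# Behavior is unchanged; only the architecture of the code differs.
assert invoice_total_before([100, 50]) == invoice_total_after([100, 50])
```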
Research on the actual practice of software architecture is somewhat sparse. One of the more recent syntheses of this work is Petre et al.'s book, _Software Design Decoded_ <petre16>, which distills many of the practices and skills of software design into a set of succinct ideas. For example, the book states, "_Every design problem has multiple, if not infinite, ways of solving it. Experts strongly prefer simpler solutions over complex ones, for they know that such solutions are easier to understand and change in the future._" And yet, in practice, studies of how projects use APIs often show that developers do the exact opposite, building projects with dependencies on large numbers of sometimes trivial APIs. Such behavior suggests that while software _architects_ like simplicity of implementation, software _developers_ are often choosing whatever is easiest to build, rather than whatever is least risky to maintain over time <abdalkareem17>.

17 chapters/communication.md Normal file
@@ -0,0 +1,17 @@
Because software engineering often distributes work across multiple people, a fundamental challenge in software engineering is ensuring that everyone on a team has the same understanding of what is being built and why. In the seminal book _The Mythical Man Month_, Fred Brooks argued that good software needs to have *conceptual integrity*, both in how it is designed and in how it is implemented <brooks95>. This is the idea that the vision of what is being built must stay intact, even as the building of it gets distributed to multiple people. When multiple people are responsible for implementing a single coherent idea, how can they ensure they all build the same idea?
The solution is effective communication. As [some events|https://www.nytimes.com/2017/08/12/upshot/techs-damaging-myth-of-the-loner-genius-nerd.html] in industry have shown, communication requires empathy and teamwork. When communication is poor, teams become disconnected and produce software defects <bettenburg13>. Therefore, achieving effective communication practices is paramount.
It turns out, however, that communication plays such a powerful role in software projects that it even shapes how projects unfold. Perhaps the most notable theory about the effect of communication is Conway's Law <conway68>. This theory argues that any designed system--software included--will reflect the communication structures involved in producing it. For example, think back to any course project where you divided the work into chunks and tried to combine them together into a final report at the end. The report and its structure probably mirrored the fact that several distinct people worked on each section of the report, rather than sounding like a single coherent voice. The same things happen in software: if the team writing error messages for a website isn't talking to the team presenting them, you're probably going to get a lot of error messages that aren't so clear, may not fit on screen, and may not be phrased using the terminology of the rest of the site. On the other hand, if those two teams meet regularly to design the error messages together, communicating their shared knowledge, they might produce a seamless, coherent experience. Software not only follows this law when a project is created, it also follows this law as projects evolve over time <zhou11>.
Because communication is so central, software engineers are constantly seeking information to further their work, going to their coworkers' desks, emailing them, chatting via messaging platforms, and even using social media <ko07>. Some of the information that developers are seeking is easier to find than others. For example, in the study I just cited, it was pretty trivial to find information about who wrote a line of code or whether a build was done, but when the information they needed resided in someone else's head (e.g., _why_ a particular line of code was written), it was slow or often impossible to retrieve it. Sometimes it's not even possible to find out who has the information. Researchers have investigated tools for trying to quantify expertise by automatically analyzing the code that developers have written, building platforms to help developers search for other developers who might know what they need to know <mockus02,begel10>.
Communication is not always effective. In fact, there are many kinds of communication that are highly problematic in software engineering teams. For example, Perlow <perlow99> conducted an [ethnography|https://en.wikipedia.org/wiki/Ethnography] of one team and found a highly dysfunctional use of interruptions in which the most expert members of a team were constantly interrupted to &ldquo;fight fires&rdquo; (immediately address critical problems) in other parts of the organization, and then the organization rewarded them for their heroics. This not only made the most expert engineers less productive, but it also disincentivized the rest of the organization to find effective ways of _preventing_ the disasters from occurring in the first place. Not all interruptions are bad, and they can increase productivity, but they do increase stress <mark08>.
Communication isn't just about transmitting information; it's also about relationships and identity. For example, the dominant culture of many software engineering work environments--and even the _perceived_ culture--is one that can deter many people from even pursuing careers in computer science. Modern work environments are still dominated by men, who speak loudly, out of turn, and disrespectfully, with sometimes even [sexual harassment|https://www.susanjfowler.com/blog/2017/2/19/reflecting-on-one-very-strange-year-at-uber] <wang16>. Computer science as a discipline, and the software industry that it shapes, has only just begun to consider the urgent need for _cultural competence_ (the ability for individuals and organizations to work effectively when their employees' thoughts, communications, actions, customs, beliefs, values, religions, and social groups vary) <washington20>. Similarly, software developers often have to work with people in other domains such as artists, content developers, data scientists, design researchers, designers, electrical engineers, mechanical engineers, product planners, program managers, and service engineers. One study found that developers' cross-disciplinary collaborations with people in these other domains required open-mindedness about the input of others, proactively informing everyone about code-related constraints, and ultimately seeing the broader picture of how pieces from different disciplines fit together; when developers didn't do these things, collaborations failed, and therefore projects failed <li17>. These are not the conditions for trusting, effective communication.
When communication is effective, it still takes time. One of the key strategies for reducing the amount of communication necessary is _knowledge sharing_ tools, which broadly refers to any information system that stores facts that developers would normally have to retrieve from a person. By storing them in a database and making them easy to search, teams can avoid interruptions. The most common knowledge sharing tools in software teams are issue trackers, which are often at the center of communication not only between developers, but also with every other part of a software organization <bertram10>. Community portals, such as GitHub pages or Slack teams, can also be effective ways of sharing documents and archiving decisions <treude11>. Perhaps the most popular knowledge sharing tool in software engineering today is [Stack Overflow|https://stackoverflow.com] <atwood16>, which archives facts about programming language and API usage. Such sites, while they can be great resources, have the same problems as many other media, such as gender bias that prevents contributions from women from being rewarded as highly as contributions from men <may19>.
Because all of this knowledge is so critical to progress, when developers leave an organization and haven't archived their knowledge somewhere, it can be quite disruptive to progress. Organizations often have single points of failure, in which a single developer may be critical to a team's ability to maintain and enhance a software product <rigby16>. When newcomers join a team and lack the right knowledge, they introduce defects <foucault15>. Some companies try to mitigate this by rotating developers between projects, &ldquo;cross-training&rdquo; them to ensure that the necessary knowledge to maintain a project is distributed across multiple engineers.
What does all of this mean for you as an individual developer? To put it simply, don't underestimate the importance of talking. Know who you need to talk to, talk to them frequently, and, to the extent that you can, write down what you know, both to lessen the demand for talking and to mitigate the risk of you not being available, and also to make your knowledge more precise and accessible in the future. It often takes decades for engineers to excel at communication. The very fact that you know why communication is important gives you a critical head start.

58 chapters/comprehension.md Normal file
@@ -0,0 +1,58 @@
Despite all of the activities that we've talked about so far&mdash;communicating, coordinating, planning, designing, architecting&mdash;really, most of a software engineer's time is spent reading code <maalej14>. Sometimes this is their own code, which makes this reading easier. Most of the time, it is someone else's code, whether it's a teammate's, or part of a library or API you're using. We call this reading *program comprehension*.
Being good at program comprehension is a critical skill. You need to be able to read a function and know what it will do with its inputs; you need to be able to read a class and understand its state and functionality; you also need to be able to comprehend a whole implementation, understanding its architecture. Without these skills, you can't test well, you can't debug well, and you can't fix or enhance the systems you're building or maintaining. In fact, studies of software engineers' first year at their first job show that a significant majority of their time is spent trying to simply comprehend the architecture of the system they are building or maintaining and understanding the processes that are being followed to modify and enhance them <dagenais10>.
What's going on when developers comprehend code? Usually, they are trying to answer questions about the code that help them build larger models of how the program works. Because program comprehension is hard, developers avoid it when they can, relying on explanations from other developers rather than trying to build precise models on their own <roehm12>. Several studies have catalogued the general questions that developers must be able to answer in order to understand programs <sillito06,latoza10>. Here are nearly forty common questions that developers ask:
1. Which type represents this domain concept or this UI element or action?
2. Where in the code is the text in this error message or UI element?
3. Where is there any code involved in the implementation of this behavior?
4. Is there an entity named something like this in that unit (for example in a project, package or class)?
5. What are the parts of this type?
6. Which types is this type a part of?
7. Where does this type fit in the type hierarchy?
8. Does this type have any siblings in the type hierarchy?
9. Where is this field declared in the type hierarchy?
10. Who implements this interface or these abstract methods?
11. Where is this method called or type referenced?
12. When during the execution is this method called?
13. Where are instances of this class created?
14. Where is this variable or data structure being accessed?
15. What data can we access from this object?
16. What does the declaration or definition of this look like?
17. What are the arguments to this function?
18. What are the values of these arguments at runtime?
19. What data is being modified in this code?
20. How are instances of these types created and assembled?
21. How are these types or objects related?
22. How is this feature or concern (object ownership, UI control, etc) implemented?
23. What in this structure distinguishes these cases?
24. What is the "correct" way to use or access this data structure?
25. How does this data structure look at runtime?
26. How can data be passed to (or accessed at) this point in the code?
27. How is control getting (from here to) here?
28. Why isn't control reaching this point in the code?
29. Which execution path is being taken in this case?
30. Under what circumstances is this method called or exception thrown?
31. What parts of this data structure are accessed in this code?
32. How does the system behavior vary over these types or cases?
33. What are the differences between these files or types?
34. What is the difference between these similar parts of the code (e.g., between sets of methods)?
35. What is the mapping between these UI types and these model types?
36. How can we know this object has been created and initialized correctly?
If you think about the diversity of questions in this list, you can see why program comprehension requires expertise. You not only need to understand programming languages quite well, but you also need to have strategies for answering all of the questions above (and more) quickly, effectively, and accurately.
So how do developers go about answering these questions? Studies comparing experts and novices show that experts use prior knowledge that they have about architecture, design patterns, and the problem domain a program is built for to know what questions to ask and how to answer them, whereas novices use surface features of code, which leads them to spend considerable time reading code that is irrelevant to a question <vonmayrhauser94,latoza07>. Reading and comprehending source code is fundamentally different from reading and comprehending natural language <binkley13>; what experts are doing is ultimately reasoning about *dependencies* between code <weiser81>. Dependencies include things like *data dependencies* (where a variable is used to compute something, what modifies a data structure, how data flows through a program, etc.) and *control dependencies* (which components call which functions, which events can trigger a function to be called, how a function is reached, etc.). All of the questions above fundamentally get at different types of data and control dependencies. In fact, theories of how developers navigate code by following these dependencies are highly predictive of what information a developer will seek next <fleming13>, suggesting that expert behavior is highly procedural. This work, and work explicitly investigating the role of identifier names <lawrie06>, finds that names are actually critical to facilitating higher level comprehension of program behavior.
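To make these two kinds of dependencies concrete, here is a small hypothetical function, annotated with the dependencies a developer might trace while comprehending it:

```javascript
// A small, hypothetical function annotated with the data and control
// dependencies a developer might trace while comprehending it.
function total(prices, taxRate) {
  let sum = 0;
  for (const price of prices) {
    sum += price;               // data dependency: sum depends on price and on its own prior value
  }
  if (sum > 100) {              // control dependency: the next line runs only if this test passes
    sum = sum * 0.9;            // data dependency: the discounted sum depends on the original sum
  }
  return sum * (1 + taxRate);   // data dependency: the result depends on sum and on taxRate
}
```

Answering a question like "why was this total discounted?" amounts to following the control dependency back to the `if`, and then following the data dependencies of `sum` back into the loop.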
Of course, program comprehension is not an inherently individual process either. Expert developers are resourceful, and frequently ask others for explanations of program behavior. Some of this might happen between coworkers, where someone seeking insight asks other engineers for summaries of program behavior, to accelerate their learning <ko07>. Others might rely on public forums, such as Stack Overflow, for explanations of API behavior <mamykina11>. These social help seeking strategies are strongly mediated by a developer's willingness to express that they need help to more expert teammates. Some research, for example, has found that junior developers are reluctant to ask for help out of fear of looking incompetent, even when everyone on a team is willing to offer help and their manager prefers that the developer prioritize productivity over fear of stigma <begel08>. And then, of course, learning is just hard. For example, one study investigated the challenges that developers face in learning new programming languages, finding that unlearning old habits, shifting to new language paradigms, learning new terminology, and adjusting to new tools all required materials that could bridge from their prior knowledge to the new language, but few such materials existed <shrestha20>. These findings suggest the critical importance of teams ensuring that newcomers view them as psychologically safe places, where vulnerable actions like expressing a need for help will not be punished, ridiculed, or shamed, but rather validated, celebrated, and encouraged.
While much of program comprehension is individual and social skill, some aspects of program comprehension are determined by the design of programming languages. For example, some programming languages result in programs that are more comprehensible. One framework called the _Cognitive Dimensions of Notations_ <green89> lays out some of the tradeoffs in programming language design that result in these differences in comprehensibility. For example, one of the dimensions in the framework is *consistency*, which refers to how much of a notation can be _guessed_ based on an initial understanding of a language. JavaScript has low consistency because of operators like `==`, which behave differently depending on what the type of the left and right operands are. Knowing the behavior for Booleans doesn't tell you the behavior for a Boolean being compared to an integer. In contrast, Java is a higher-consistency language: applying `==` to incompatible types, such as a `String` and an `int`, is a compile-time error rather than a silent coercion.
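For example, here are a few of JavaScript's loose-equality results; none of them can be guessed from knowing how `==` behaves on two values of the same type:

```javascript
// JavaScript's == coerces operands of different types before comparing,
// so mixed-type results can't be guessed from same-type behavior.
console.log(false == 0);         // true: false coerces to the number 0
console.log(false == "");        // true: both operands coerce to 0
console.log("0" == false);       // true: both operands coerce to 0
console.log(null == undefined);  // true: a special case in the language
console.log(null == 0);          // false: null loosely equals only undefined
console.log([] == false);        // true: [] coerces to "" and then to 0
```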
These differences in notation can have some impact. Encapsulation through data structures leads to better comprehension than monolithic or purely functional approaches <woodfield81,bhattacharya11>. Declarative programming paradigms (like CSS or HTML) have greater comprehensibility than imperative languages <salvaneschi14>. Statically typed languages like Java (which require developers to declare the data type of all variables) result in fewer defects <ray14>, better comprehensibility because of the ability to construct better documentation <endrikat14>, and easier debugging <hanenberg13>. In fact, studies of more dynamic languages like JavaScript and Smalltalk <callaú13> show that the dynamic features of these languages aren't really used all that much anyway. Despite all of these measurable differences, the impact of notation seems to be modest in practice <ray14>. All of this evidence suggests that the more you tell a compiler about what your code means (by declaring types, writing functional specifications, etc.), the more it helps the other developers know what it means too, but that this doesn't translate into huge differences in defects.
Code editors, development environments, and program comprehension tools can also be helpful. Early evidence showed that simple features like syntax highlighting and careful typographic choices can improve the speed of program comprehension <baecker88>. I have also worked on several tools to support program comprehension, including the Whyline, which automates many of the more challenging aspects of navigating dependencies in code, and visualizes them <ko09>:
|https://www.youtube.com/embed/pbElN8nfe3k|The Whyline for Java|The Whyline for Java, a debugging tool that facilitates dependency navigation|Amy J. Ko|
The path from novice to expert in program comprehension is one that involves understanding programming language semantics exceedingly well and reading _a lot_ of code, design patterns, and architectures. Anticipate that as you develop these skills, it will take you time to build robust understandings of what a program is doing, slowing down your writing, testing, and debugging.

chapters/debugging.md Normal file
@@ -0,0 +1,53 @@
Despite all of your hard work at design, implementation, and verification, your software has failed. Somewhere in its implementation there's a line of code, or multiple lines of code, that, given a particular set of inputs, causes the program to fail. How do you find those defective lines of code? You debug, and when you're doing debugging right, you do it systematically <zeller09>. And yet, despite decades of research and practice, most developers have weak debugging skills, don't know how to properly use debugging tools, and still rely on basic print statements <beller18>.
To remedy this, let's discuss some of the basic skills involved in debugging.
# Finding the defect
To start, you have to *reproduce* the failure. Failure reproduction is a matter of identifying inputs to the program (whether data it receives upon being executed, user inputs, network traffic, or any other form of input) that cause the failure to occur. If you found this failure while _you_ were executing the program, then you're lucky: you should be able to repeat whatever you just did and identify the inputs or series of inputs that caused the problem, giving you a way of testing that the program no longer fails once you've fixed the defect. If someone else was the one executing the program (for example, a user, or someone on your team), you had better hope that they reported clear steps for reproducing the problem. When bug reports lack clear reproduction steps, bugs often can't be fixed <bettenburg08>.
If you can reproduce the problem, the next challenge is to *localize* the defect, trying to identify the cause of the failure in code. There are many different strategies for localizing defects. At the highest level, one can think of this process as a hypothesis testing activity <gilmore91>.
1. Observe the failure.
2. Form a hypothesis about the cause of the failure.
3. Devise a way to test the hypothesis, such as analyzing the code you believe caused it or executing the program with the reproduction steps and stopping at the line you believe is wrong.
4. If the hypothesis was supported (meaning the program failed for the reason you thought it did), stop. Otherwise, return to step 2.
The problems with the strategy above are numerous. First, what if you can't think of a possible cause? Second, what if your hypothesis is way off? You could spend _hours_ generating hypotheses that are completely off base, effectively analyzing all of your code before finding the defect.
Another strategy is working backwards <ko08>.
1. Observe the failure.
2. Identify the line of code that caused the failing output.
3. Identify the lines of code that caused the line of code in step 2 and any data used on that line.
4. Repeat step 3 recursively, analyzing all lines of code for defects along the chain of causality.
The nice thing about this strategy is that you're _guaranteed_ to find the defect if you can accurately identify the causes of each line of code contributing to the failure. It still requires you to analyze each line of code and potentially execute to it in order to inspect what might be wrong, but it requires potentially less work than guessing. My dissertation work investigated how to automate this strategy, allowing you to simply click on the faulty output and then immediately see all upstream causes of it <ko08>.
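As a tiny, hypothetical illustration of working backwards, consider tracing a malformed greeting back through the chain of values that produced it:

```javascript
// Working backwards from a failing output through its causes (hypothetical).
function describe(user) {
  const name = user.first + " " + user.last; // 3. ...which was built from user.first and user.last
  const label = name.toUpperCase();          // 2. ...which was computed from name...
  return "Hello, " + label;                  // 1. the failing output contains label...
}
// describe({ first: "Ada" }) returns "Hello, ADA UNDEFINED"; following the
// chain backwards lands on user.last, which the caller never provided.
```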
Yet another strategy called _delta debugging_ is to compare successful and failing executions of the program <zeller02>.
1. Identify a successful set of inputs.
2. Identify a failing set of inputs.
3. Compare the differences in state from the successful and failing executions.
4. Identify a change to input that minimizes the differences in states between the two executions.
5. Variables and values that differ in these two executions contain the defect.
This is a powerful strategy, but only when you have successful inputs and when you can automate comparing runs and identifying changes to inputs.
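One piece of this idea, minimizing a failing input, can be sketched in a few lines. This simplified version only keeps whichever half of the input still fails; Zeller's full ddmin algorithm also increases granularity when neither half fails alone, and delta debugging proper compares program states, not just inputs:

```javascript
// A simplified sketch of delta-debugging-style input minimization:
// repeatedly keep whichever half of the input still fails, converging
// on a smaller input that reproduces the failure.
function minimize(input, fails) {
  let current = input;
  let shrunk = true;
  while (shrunk && current.length > 1) {
    shrunk = false;
    const mid = Math.floor(current.length / 2);
    for (const half of [current.slice(0, mid), current.slice(mid)]) {
      if (fails(half)) {  // the failure still reproduces with just this half
        current = half;
        shrunk = true;
        break;
      }
    }
  }
  return current;
}
```

For a hypothetical parser that fails on any input containing `"x"`, `minimize("aaxaa".split(""), chunk => chunk.includes("x"))` shrinks the failing input down to just `["x"]`.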
One of the simplest strategies is to work forward:
1. Execute the program with the reproduction steps.
2. Step forward one instruction at a time until the program deviates from intended behavior.
3. The step that deviates, or one of the previous steps, caused the failure.
This strategy is easy to follow, but can take a _long_ time because there are so many instructions that can execute.
For particularly complex software, it can sometimes be necessary to debug with the help of teammates, helping to generate hypotheses, identify more effective search strategies, or rule out the influence of particular components in a bug <aranda09>.
Ultimately, all of these strategies are essentially search algorithms, seeking the events that occurred while a program executed with a particular set of inputs that caused its output to be incorrect. Because programs execute millions and potentially billions of instructions, these strategies are necessary to reduce the scope of your search. This is where debugging *tools* come in: if you can find a tool that supports an effective strategy, then your work to search through those millions and billions of instructions will be greatly accelerated. This might be a print statement, a breakpoint debugger, a performance profiler, or one of the many advanced debugging tools beginning to emerge from research.
# Fixing defects
Once you've found the defect, what do you do? It turns out that there are usually many ways to repair a defect. How professional developers fix defects depends a lot on the circumstances: if they're near a release, they may not even fix it if it's too risky; if there's no pressure, and the fix requires major changes, they may refactor or even redesign the program to prevent the failure <murphyhill13>. This can be a delicate, risky process: in one study of open source operating systems bug fixes, 27% of the incorrect fixes were made by developers who had never read the source code files they changed, suggesting that key to correct fixes is a deep comprehension of exactly how the defective code is intended to behave <yin11>.
These risks suggest the importance of *impact analysis*, the activity of systematically and precisely analyzing the consequences of some proposed fix. This can involve analyzing dependencies that are affected by a bug fix, re-running manual and automated tests, and perhaps even running user tests to ensure that the way in which you fixed a bug does not inadvertently introduce problems with usability or workflow. Debugging is therefore like surgery: slow, methodical, purposeful, and risk-averse.

chapters/history.md Normal file
@@ -0,0 +1,31 @@
Computers haven't been around for long. If you read one of the many histories of computing and information, such as James Gleick's _The Information_<gleick11>, Jonathan Grudin's _From Tool to Partner: The Evolution of Human-Computer Interaction_<grudin17>, or Margo Shetterly's _Hidden Figures_<shetterly17>, you'll learn that before _digital_ computers, computers were people, calculating things manually. And that _after_ digital computers, programming wasn't something that many people did. It was reserved for whoever had access to the mainframe and they wrote their programs on punchcards like the one above. Computing was in no way a ubiquitous, democratized activity--it was reserved for the few that could afford and maintain a room-sized machine.
Because programming required such painstaking planning in machine code and computers were slow, most programs were not that complex. Their value was in calculating things faster than a person could do by hand, which meant thousands of calculations in a minute rather than one calculation in a minute. Computer programmers were not solving problems that had no solutions yet; they were translating existing solutions (for example, a quadratic formula) into machine instructions. Their power wasn't in creating new realities or facilitating new tasks, it was accelerating old tasks.
The birth of software engineering, therefore, did not come until programmers started solving problems that _didn't_ have existing solutions, or were new ideas entirely. Most of these were done in academic contexts to develop things like basic operating systems and methods of input and output. These were complex projects, but as research, they didn't need to scale; they just needed to work. It wasn't until the late 1960s when the first truly large software projects were attempted commercially, and software had to actually perform.
The IBM 360 operating system was one of the first big projects of this kind. Suddenly, there were multiple people working on multiple components, all which interacted with one another. Each part of the program needed to coordinate with the others, which usually meant that each part's _authors_ needed to coordinate, and the term _software engineering_ was born. Programmers and academics from around the world, especially those who were working on big projects, created conferences so they could meet and discuss their challenges. In the [first software engineering conference|http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF] in 1968, attendees speculated about why projects were shipping late, why they were over budget, and what they could do about it. There was a name for the problem, and many questions, but few answers.
At the time, one of the key people behind pursuing these answers was [Margaret Hamilton|https://en.wikipedia.org/wiki/Margaret_Hamilton_(scientist)], a computer scientist who was Director of the Software Engineering Division of the MIT Instrumentation Laboratory. One of the lab's key projects in the late 1960s was developing the on-board flight software for the Apollo space program. Hamilton led the development of error detection and recovery, the information displays, the lunar lander, and many other critical components, while managing a team of other computer scientists who helped. It was as part of this project that many of the central problems in software engineering began to emerge, including verification of code, coordination of teams, and managing versions. This led to one of her passions, which was giving software legitimacy as a form of engineering--at the time, it was viewed as routine, uninteresting, and simple work. Her leadership established software engineering as a core part of systems engineering.
The first conference, the IBM 360 project, and Hamilton's experiences on the Apollo mission identified many problems that had no clear solutions:
* When you're solving a problem that doesn't yet have a solution, what is a good process for building a solution?
* When software does so many different things, how can you know software "works"?
* How can you make progress when _no one_ on the team understands every part of the program?
* When people leave a project, how do you ensure their replacement has all of the knowledge they had?
* When no one understands every part of the program, how do you diagnose defects?
* When people are working in parallel, how do you prevent them from clobbering each other's work?
* If software engineering is about more than coding, what skills does a good coder need to have?
* What kinds of tools and languages can accelerate a programmer's work and help them prevent mistakes?
* How can projects not lose sight of the immense complexity of human needs, values, ethics, and policy that interact with engineering decisions?
As it became clear that software was not an incremental change in technology, but a profoundly disruptive one, countless communities began to explore these questions in research and practice. Black American entrepreneurs began to explore how to use software to connect and build community well before the internet was ubiquitous, creating some of the first web-scale online communities and forging careers at IBM, ultimately to be suppressed by racism in the workplace and society <mcilwain19>. White entrepreneurs in Silicon Valley began to explore ways to bring computing to the masses, bolstered by the immense capital investments of venture capitalists, who saw opportunities for profit through disruption <kenney00>. And academia, which had helped demonstrate the feasibility of computing and established its foundations, began to invent the foundational tools of software engineering, including version control systems, software testing, and a wide array of high-level programming languages such as Fortran <metcalf02>, LISP <mccarthy78>, C++ <stroustrup96> and Smalltalk <kay96>, all of which inspired the design of today's most popular languages, including Java, Python, and JavaScript. And throughout, despite the central role of women in programming the first digital computers, managing the first major software engineering projects, and imagining how software could change the world, women were systematically excluded from all of these efforts, their histories forgotten, erased, and overshadowed by pervasive sexism in commerce and government <abbate12>.
While technical progress has been swift, progress on the _human_ aspects of software engineering has been more difficult to understand and improve. One of the seminal books on these issues was Fred P. Brooks, Jr.'s _The Mythical Man Month_ <brooks95>. In it, he presented hundreds of claims about software engineering. For example, he hypothesized that adding more programmers to a project would actually make productivity _worse_ at some level, not better, because knowledge sharing would be an immense but necessary burden. He also claimed that the _first_ implementation of a solution is usually terrible and should be treated like a prototype: used for learning and then discarded. These and other claims have been the foundation of decades of research, all in search of some deeper answer to the questions above. And only recently have scholars begun to reveal how software and software engineering tends to encode, amplify, and reinforce existing structures and norms of discrimination by encoding it into data, algorithms, and software architectures <benjamin19>. These histories show that, just like any other human activity, there are strong cultural forces that shape how people engineer software together, what they engineer, and what effect that has on society.
If we step even further beyond software engineering as an activity and think more broadly about the role that software is playing in society today, there are also other, newer questions that we've only begun to answer. If every part of society now runs on code, what responsibility do software engineers have to ensure that code is right? What responsibility do software engineers have to avoid algorithmic bias? If our cars are to soon drive us around, who's responsible for the first death: the car, the driver, the software engineers who built it, or the company that sold it? These ethical questions are in some ways the _future_ of software engineering, likely to shape its regulatory context, its processes, and its responsibilities.
There are also _economic_ roles that software plays in society that it didn't before. Around the world, software is a major source of job growth, but also a major source of automation, eliminating jobs that people used to do. These larger forces that software is playing on the world demand that software engineers have a stronger understanding of the roles that software plays in society, as the decisions that engineers make can have profoundly impactful unintended consequences.
We're nowhere close to having deep answers about these questions, neither the old ones nor the new ones. We know _a lot_ about programming languages and _a lot_ about testing. These are areas amenable to automation and so computer science has rapidly improved and accelerated these parts of software engineering. The rest of it, as we shall see, has not made much progress. In this class, we'll discuss what we know and the much larger space of what we don't.

chapters/monitoring.md Normal file
@@ -0,0 +1,44 @@
The first application I ever wrote was a complete and utter failure.
I was an eager eighth grader, full of wonder and excitement about the infinite possibilities in code, with an insatiable desire to build, build, build. I'd made plenty of little games and widgets for myself, but now was my chance to create something for someone else: my friend and I were making a game and he needed a tool to create pixel art for it. We had no money for fancy Adobe licenses, and so I decided to make a tool.
In designing the app, I made every imaginable software engineering mistake. I didn't talk to him about requirements. I didn't test on his computer before sending the finished app. I certainly didn't conduct any usability tests, performance tests, or acceptance tests. The app I ended up shipping was a pure expression of what I wanted to build, not what he needed to be creative or productive. As a result, it was buggy, slow, confusing, and useless, and blinded by my joy of coding, I had no clue.
Now, ideally my "customer" would have reported any of these problems to me right away, and I would have learned some tough lessons about software engineering. But this customer was my best friend, and also a very nice guy. He wasn't about to trash all of my hard work. Instead, he suffered in silence. He struggled to install, struggled to use, and worst of all struggled to create. He produced some amazing art a few weeks after I gave him the app, but it was only after a few months of progress on our game that I learned he hadn't used my app for a single asset, preferring instead to suffer through Microsoft Paint. My app was too buggy, too slow, and too confusing to be useful. I was devastated.
Why didn't I know it was such a complete failure? *Because I wasn't looking*. I'd ignored the ultimate test suite: _my customer_. I'd learned that the only way to really know whether software requirements are right is by watching how it executes in the world through *monitoring* <turnbull16>.
# Discovering Failures
Of course, this is easier said than done. That's because the (ideally) massive numbers of people executing your software are not easily observable <menzies13>. Moreover, each software quality you might want to monitor (performance, functional correctness, usability) requires entirely different methods of observation and analysis. Let's talk about some of the most important qualities to monitor and how to monitor them.
Crashes, hangs, and other overt failures are some of the easiest to detect because they are unambiguous. Microsoft was one of the first organizations to monitor them comprehensively, building what eventually became known as Windows Error Reporting <glerum09>. It turns out that actually capturing these errors at scale and mining them for repeating, reproducible failures is quite complex, requiring classification, progressive data collection, and many statistical techniques to extract signal from noise. In fact, Microsoft has a dedicated team of data scientists and engineers whose sole job is to manage the error reporting infrastructure, monitor and triage incoming errors, and use trends in errors to make decisions about improvements to future releases and release processes. This is now standard practice in most companies and organizations, including other big software companies (Google, Apple, IBM, etc.), as well as open source projects (e.g., Mozilla). In fact, many application development platforms now include this as a standard operating system feature.
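The core idea of such error reporting, capturing failures and bucketing them by a signature so that repeats can be counted and triaged, can be sketched in a few lines. The signature scheme here is a deliberately crude stand-in for the sophisticated classification these real systems perform:

```javascript
// A minimal sketch of crash "bucketing": group captured failures by a
// signature (error type + top stack frame) so repeated failures can be
// counted and triaged, as large-scale error reporting systems do.
const buckets = new Map();

function report(error) {
  // Derive a crude signature from the error type and the top stack frame.
  const topFrame = (error.stack || "").split("\n")[1] || "unknown";
  const signature = `${error.name}: ${topFrame.trim()}`;
  buckets.set(signature, (buckets.get(signature) || 0) + 1);
}

function guarded(fn) {
  // Wrap an entry point so failures are captured instead of crashing.
  return (...args) => {
    try {
      return fn(...args);
    } catch (e) {
      report(e);
      return undefined;
    }
  };
}
```

In a real deployment, `report` would send the signature and diagnostic state to a collection service rather than counting in memory.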
Performance, like crashes, kernel panics, and hangs, is easily observable in software, but a bit trickier to characterize as good or bad. How slow is too slow? How bad is it if something is slow occasionally? You'll have to define acceptable thresholds for different use cases to be able to identify problems automatically. Some experts in industry <grabner16> still view this as an art.
It's also hard to monitor performance without actually _harming_ performance. Many tools and web services (e.g., [New Relic|https://newrelic.com/]) are getting better at reducing this overhead and offering real time data about performance problems through sampling.
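A minimal sketch of threshold-based performance monitoring might record sampled timings and alert on a high percentile rather than the mean, since occasional slowness is exactly what averages hide. The 200 millisecond budget below is an arbitrary, hypothetical choice; as noted above, picking it is something of an art:

```javascript
// A minimal sketch of threshold-based latency monitoring: record sampled
// operation timings and flag when a high percentile exceeds a budget.
const samples = [];

function recordLatency(ms) {
  samples.push(ms);
}

function percentile(p) {
  // Nearest-rank percentile over the samples collected so far.
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[index];
}

function isTooSlow(budgetMs = 200) {
  // Alert on the 95th percentile, not the mean: occasional slowness matters.
  return percentile(95) > budgetMs;
}
```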
Monitoring for data breaches, identity theft, and other security and privacy concerns is an incredibly important part of running a service, but also very challenging. This is partly because the tools for doing this monitoring are not yet well integrated, requiring each team to develop its own practices and monitoring infrastructure. But it's also because protecting data and identity is more than just detecting and blocking malicious payloads. It's also about recovering from ones that get through, developing reliable data streams about application network activity, monitoring for anomalies and trends in those streams, and developing practices for tracking and responding to warnings that your monitoring system might generate. Researchers are still actively inventing more scalable, usable, and deployable techniques for all of these activities.
The biggest limitation of the monitoring above is that it only reveals _what_ people are doing with your software, not _why_ they are doing it, or why it has failed. Monitoring can help you know that a problem exists, but it can't tell you why a program failed or why a person failed to use your software successfully.
# Discovering Missing Requirements
Usability problems and missing features, unlike some of the preceding problems, are even harder to detect or observe, because the only true indicator that something is hard to use is in a user's mind. That said, there are a couple of approaches to detecting the possibility of usability problems.
One is by monitoring application usage. Assuming your users will tolerate being watched, there are many techniques: 1) automatically instrumenting applications for user interaction events, 2) mining events for problematic patterns, and 3) browsing and analyzing patterns for more subjective issues <ivory01>. Modern tools and services make it easier to capture, store, and analyze this usage data, although they still require you to have some upfront intuition about what to monitor. More advanced, experimental techniques in research automatically analyze undo events as indicators of usability problems <akers09>; this work observes that undo is often an indicator of a mistake in creative software, and mistakes are often indicators of usability problems.
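The first two steps, instrumenting interaction events and mining them for problematic patterns, might look something like this sketch, which flags features with unusually high undo rates; the feature names and events here are entirely hypothetical:

```javascript
// A sketch of usage instrumentation: log interaction events, then mine
// them for features with high undo rates, a possible usability signal.
const events = [];

function logEvent(feature, action) {
  events.push({ feature, action, time: Date.now() });
}

function undoRates() {
  // For each feature, compute the fraction of its events that are undos.
  const totals = new Map();
  const undos = new Map();
  for (const { feature, action } of events) {
    totals.set(feature, (totals.get(feature) || 0) + 1);
    if (action === "undo") undos.set(feature, (undos.get(feature) || 0) + 1);
  }
  const rates = new Map();
  for (const [feature, total] of totals) {
    rates.set(feature, (undos.get(feature) || 0) / total);
  }
  return rates;
}
```

A feature whose undo rate is far above its peers' is a candidate for closer, qualitative investigation, since the data alone can't say _why_ users are undoing.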
All of the usage data above can tell you _what_ your users are doing, but not _why_. For this, you'll need to get explicit feedback from support tickets, support forums, product reviews, and other critiques of user experience. Some of these types of reports go directly to engineering teams, becoming part of bug reporting systems, while others end up in customer service or marketing departments. While all of this data is valuable for monitoring user experience, most companies still do a bad job of using anything but bug reports to improve user experience, overlooking the rich insights in customer service interactions <chilana11>.
Although bug reports are widely used, they have significant problems as a way to monitor: for developers to fix a problem, they need detailed steps to reproduce the problem, or stack traces or other state to help them track down the cause of a problem <bettenburg08>; these are precisely the kinds of information that are hard for users to find and submit, given that most people aren't trained to produce reliable, precise information for failure reproduction. Additionally, once the information is recorded in a bug report, even _interpreting_ the information requires social, organizational, and technical knowledge, meaning that if a problem is not addressed soon, an organization's ability to even interpret what the failure was and what caused it can decay over time <aranda09>. All of these issues can lead to intractable debugging challenges <qureshi16>.
Larger software organizations now employ data scientists to help mitigate these challenges of analyzing and maintaining monitoring data and bug reports. Most of them try to answer questions such as <begel14>:
* "How do users typically use my application?"
* "What parts of a software product are most used and/or loved by customers?"
* "What are the best key performance indicators (KPIs) for monitoring services?"
* "What are the common patterns of execution in my application?"
* "How well does test coverage correspond to actual code usage by our customers?"
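The last of these questions can be given a crude quantitative answer by weighting test coverage by observed usage. A minimal sketch, assuming hypothetical telemetry and coverage data (all names invented):

```python
# Hypothetical data: which functions the test suite covers, and how often
# customers' sessions actually executed each function (from telemetry).
test_covered = {"login", "search", "export"}
usage_counts = {"login": 900, "search": 450, "share": 300, "export": 50}

def usage_weighted_coverage(covered, usage):
    """Fraction of observed customer executions that hit test-covered code.
    Weighting by usage highlights the coverage gaps customers feel most."""
    total = sum(usage.values())
    hit = sum(count for fn, count in usage.items() if fn in covered)
    return hit / total if total else 0.0

print(usage_weighted_coverage(test_covered, usage_counts))  # ~0.82
```

Here the untested `share` feature accounts for nearly a fifth of all customer activity, which a plain line-coverage number would not reveal.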
The most mature software engineering organizations even have multiple distinct data science roles, including _Insight Providers_, who gather and analyze data to inform decisions, _Modeling Specialists_, who use their machine learning expertise to build predictive models, and _Platform Builders_, who create the infrastructure necessary for gathering data <kim16>. Of course, smaller organizations may have individuals who take on all of these roles. Moreover, not all ways of discovering missing requirements are data science roles. Many companies, for example, have customer experience specialists and community managers, who are less interested in data about experiences and more interested in directly communicating with customers about their experiences. These relational forms of monitoring can be much more effective at revealing software quality issues that aren't as easily observed, such as issues of racial or sexual bias in software or other forms of structural injustices built into the architecture of software.
All of this effort to capture and maintain user feedback can be messy to analyze because it usually comes in the form of natural language text. Services like [AnswerDash|http://answerdash.com] (a company I co-founded) structure this data by organizing requests around frequently asked questions. AnswerDash imposes a little widget on every page in a web application, making it easy for users to submit questions and find answers to previously asked questions. This generates data about the features and use cases that are leading to the most confusion, which types of users are having this confusion, and where in an application the confusion is happening most frequently. This product was based on several years of research in my lab <chilana13>.
The photo above is a candid shot of some of the software engineers of _AnswerDash_, a company I co-founded in 2012 that was later acquired in 2020. There are a few things to notice in the photograph. First, you see one of the employees explaining something, while others are diligently working off to the side. It's not a huge team; just a few engineers, plus several employees in other parts of the organization in another room. This, as simple as it looks, is pretty much what all software engineering work looks like. Some organizations have one of these teams; others have thousands.
What you _can't_ see is just how much _complexity_ underlies this work. You can't see the organizational structures that exist to manage this complexity. Inside this room and the rooms around it were processes, standards, reviews, workflows, managers, values, culture, decision making, analytics, marketing, sales. And at the center of it were people executing all of these things as well as they could to achieve the organization's goal.
Organizations are a much bigger topic than I could possibly address here. To deeply understand them, you'd need to learn about [organizational studies|https://en.wikipedia.org/wiki/Organizational_studies], [organizational behavior|https://en.wikipedia.org/wiki/Organizational_behavior], [information systems|https://en.wikipedia.org/wiki/Information_system], and business in general.
The subset of this knowledge that's critical to understand about software engineering is limited to a few important concepts. The first and most important concept is that even in software organizations, the point of the company is rarely to make software; it's to provide *value* <osterwalder15>. Software is sometimes the central means to providing that value, but more often than not, it's the _information_ flowing through that software that's the truly valuable piece. [Requirements|requirements], which we will discuss in a later chapter, help engineers organize how software will provide value.
The individuals in a software organization take on different roles to achieve that value. These roles are sometimes spread across different people and sometimes bundled up into one person, depending on how the organization is structured, but the roles are always there. Let's go through each one in detail so you understand how software engineers relate to each role.
* *Marketers* look for opportunities to provide value. In for-profit businesses, this might mean conducting market research, estimating the size of opportunities, identifying audiences, and getting those audiences' attention. Non-profits need to do this work as well in order to get their solutions to people, but may be driven more by solving problems than making money.
* *Product* managers decide what value the product will provide, monitoring the marketplace and prioritizing work.
* *Designers* decide _how_ software will provide value. This isn't about code or really even about software; it's about envisioning solutions to problems that people have.
* *Software engineers* write code with other engineers to implement requirements envisioned by designers. If they fail to meet requirements, the design won't be implemented correctly, which will prevent the software from providing value.
* *Sales* takes the product that's been built and tries to sell it to the audiences that marketers have identified. They also try to refine an organization's understanding of what the customer wants and needs, providing feedback to marketing, product, and design, which engineers then address.
* *Support* helps the people using the product to use it successfully and, like sales, provides feedback to product, design, and engineering about the product's value (or lack thereof) and its defects.
As I noted above, sometimes the roles above get merged into individuals. When I was CTO at AnswerDash, I had software engineering roles, design roles, product roles, sales roles, _and_ support roles. This was partly because it was a small company when I was there. As organizations grow, these roles tend to be divided into smaller pieces. This division often means that different parts of the organization don't share knowledge, even when it would be advantageous <chilana11>.
Note that in the division of responsibilities above, software engineers really aren't the designers by default. They don't decide what product is made or what problems that product solves. They may have opinions&mdash;and a great deal of power to enforce their opinions, as the people building the product&mdash;but it's not ultimately their decision.
There are other roles you might be thinking of that I haven't mentioned:
* *Engineering managers* exist in all roles when teams get to a certain size, helping to move information between higher and lower parts of an organization. Even _engineering_ managers are primarily focused on organizing and prioritizing work, and not doing engineering <kalliamvakou18>. Much of their time is also spent ensuring every engineer has what they need to be productive, while also managing coordination and interpersonal conflict between engineers.
* *Data scientists*, although a new role, typically _facilitate_ decision making on the part of any of the roles above <begel14>. They might help engineers find bugs, marketers analyze data, track sales targets, mine support data, or inform design decisions. They're experts at using data to accelerate and improve the decisions made by the roles above.
* *Researchers*, also called user researchers, also help people in a software organization make decisions, but usually _product_ decisions, helping marketers, sales, and product managers decide what products to make and who would want them. In many cases, they can complement the work of data scientists, [providing qualitative work to triangulate quantitative data|https://www.linkedin.com/pulse/ux-research-analytics-yann-riche?trk=prof-post].
* *Ethics and policy specialists*, who might come with backgrounds in law, policy, or social science, might shape terms of service, software licenses, algorithmic bias audits, privacy policy compliance, and processes for engaging with stakeholders affected by the software being engineered. Any company that works with data, especially those that work with data at large scales or in contexts with great potential for harm, hate, and abuse, needs significant expertise to anticipate and prevent harm from engineering and design decisions.
Every decision made in a software team is under uncertainty, and so another important concept in organizations is *risk* <boehm91>. It's rarely possible to predict the future, and so organizations must take risks. Much of an organization's function is to mitigate the consequences of risks. Data scientists and researchers mitigate risk by increasing confidence in an organization's understanding of the market and its consumers. Engineers manage risk by trying to avoid defects. Of course, as many popular outlets on software engineering have begun to discover, when software fails, it usually "did exactly what it was told to do. The reason it failed is that it was told to do the wrong thing." <somers17>
Open source communities are organizations too. The core activities of design, engineering, and support still exist in these, but how much a community is engaged in marketing and sales depends entirely on the purpose of the community. Big, established open source projects like [Mozilla|https://mozilla.org] have revenue, buildings, and a CEO, and while they don't sell anything, they do market. Others like Linux <lee03> rely heavily on contributions from both volunteers <ye03> and paid employees from companies that depend on Linux, like IBM, Google, and others. In these settings, there are still all of the challenges that come with software engineering, but fewer of the constraints that come from a for-profit or non-profit motive. In fact, recent work empirically uncovered 9 reasons why modern open source projects fail: 1) lost to competition, 2) made obsolete by technology advances, 3) lack of time to volunteer, 4) lack of interest by contributors, 5) outdated technologies, 6) poor maintainability, 7) interpersonal conflicts amongst developers, 8) legal challenges, and 9) acquisition <coelho17>. Another study showed that funding open source projects often requires substantial donations from large corporations; most projects don't ask for donations, and those that do receive very little unless they are well-established, and most of those funds go to paying for basic expenses such as engineering salaries <overney20>. Those challenges aren't too different from those of traditional software organizations, aside from the added difficulty of sustaining a volunteer workforce.
All of the above has some important implications for what it means to be a software engineer:
* Engineers are not the only important role in a software organization. In fact, they may be less important to an organization's success than other roles because the decisions they make (how to implement requirements) have smaller impact on the organization's goals than other decisions (what to make, who to sell it to, etc.).
* Engineers have to work with _a lot_ of people in different roles. Learning what those roles are and what shapes their success is important to being a good collaborator <li17>.
* While engineers might have many great ideas for product, if they really want to shape what they're building, they should be in a product role, not an engineering role.
All that said, without engineers, products wouldn't exist. They ensure that every detail about a product reflects the best knowledge of the people in their organization, and so attention to detail is paramount. In future chapters, we'll discuss all of the ways that software engineers manage this detail, mitigating the burden on their memories with tools and processes.
So you know what you're going to build and how you're going to build it. What process should you use to build it? Who's going to build what? What order should you build it in? How do you make sure everyone is in sync while you're building it? <pettersen16> And most importantly, how do you make sure you build well and on time? These are fundamental questions in software engineering with many potential answers. Unfortunately, we still don't know which of those answers are right.
At the foundation of all of these questions are basic matters of [project management|https://en.wikipedia.org/wiki/Project_management]: plan, execute, and monitor. But developers in the 1970's and on found that traditional project management ideas didn't seem to work. The earliest process ideas followed a "waterfall" model, in which a project begins by identifying requirements, writing specifications, implementing, testing, and releasing, all under the assumption that every stage could be fully tested and verified. (Recognize this? It's the order of topics we're discussing in this class!). Many managers seemed to like the waterfall model because it seemed structured and predictable; this was perhaps because most managers were originally software developers, and preferred a structured approach to project management <weinberg82>. The reality, however, was that no matter how much verification one did of each of these steps, there always seemed to be more information in later steps that caused a team to reconsider its earlier decisions (e.g., imagine a customer liked a requirement when it was described in the abstract, but when it was actually built, they rejected it, because they finally saw what the requirement really meant).
In 1988, Barry Boehm proposed an alternative to waterfall called the *Spiral model* <boehm88>: rather than trying to verify every step before proceeding to the next level of detail, _prototype_ every step along the way, getting partial validation, iteratively converging through a series of prototypes toward both an acceptable set of requirements _and_ an acceptable product. Throughout, risk assessment is key, encouraging a team to reflect and revise process based on what they are learning. What was important about these ideas was not the particulars of Boehm's proposed process, but the disruptive idea that iteration and process improvement are critical to engineering great software.
|spiral.png|A spiral, showing successive rounds of prototyping and risk analysis.|Boehm's spiral model of software development.|Boehm|
Around the same time, two influential books were published. Fred Brooks wrote *The Mythical Man Month* <brooks95>, a book about software project management, full of provocative ideas that would be tested over the next three decades, including the idea that adding more people to a project would not necessarily increase productivity. Tom DeMarco and Timothy Lister wrote another famous book, *Peopleware: Productive Projects and Teams* <demarco87> arguing that the major challenges in software engineering are human, not technical. Both of these works still represent some of the most widely-read statements of the problem of managing software development.
These early ideas in software project management led to a wide variety of other discoveries about process. For example, organizations of all sizes can improve their process if they are very aware of what the people in the organization know, what they are capable of learning, and if they build robust processes to continually improve those processes <dybå02,dybå03>. This might mean monitoring the pace of work, incentivizing engineers to reflect on inefficiencies in process, and teaching engineers how to be comfortable with process change.
Beyond process improvement, other factors emerged. For example, researchers discovered that critical to team productivity was *awareness* of teammates' work <ko07>. Teams need tools like dashboards to help make them aware of changing priorities and tools like feeds to coordinate short term work <treude10>. Moreover, researchers found that engineers tended to favor non-social sources such as documentation for factual information, but social sources for information to support problem solving <milewski07>. Decades ago, developers used tools like email and IRC for awareness; now they use tools like [Slack|https://slack.com], [Trello|https://trello.com/], [GitHub|http://github.com], and [JIRA|https://www.atlassian.com/software/jira], which have the same basic functionality, but are much more polished, streamlined, and customizable.
In addition to awareness, *ownership* is a critical idea in process. This is the idea that for every line of code, someone is responsible for its quality. The owner _might_ be the person who originally wrote the code, but it could also shift to new team members. Studies of code ownership on Windows Vista and Windows 7 found that the less clearly a component had a single owner, the more pre-release defects it had and the more post-release failures were reported by users <bird11>. This means that in addition to getting code written, having clear ownership and clear processes for transfer of ownership are key to functional correctness.
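To make the idea concrete, here is a minimal sketch of measuring ownership from a commit history. The data and the threshold interpretation are invented for illustration, not the method of any particular study:

```python
from collections import Counter

# Hypothetical commit history: (component, author) pairs. A component's
# "owner" is taken to be the author with the largest share of its changes.
commits = [
    ("parser", "alice"), ("parser", "alice"), ("parser", "bob"),
    ("ui", "bob"), ("ui", "bob"), ("ui", "carol"), ("ui", "dan"), ("ui", "erin"),
]

def ownership(history):
    """Return each component's top author and their share of its commits.
    A low share (say, well under half) suggests no clear owner."""
    per_component = {}
    for component, author in history:
        per_component.setdefault(component, Counter())[author] += 1
    return {
        component: (authors.most_common(1)[0][0],
                    authors.most_common(1)[0][1] / sum(authors.values()))
        for component, authors in per_component.items()
    }

print(ownership(commits))
```

In this toy data, `parser` has a clear owner (alice, two-thirds of its commits) while `ui` is diffusely owned (bob, only 40%), which the studies above would predict to be the riskier component.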
*Pace* is another factor that affects quality. Clearly, there's a tradeoff between how fast a team works and the quality of the product it can build. In fact, interview studies of engineers at Google, Facebook, Microsoft, Intel, and other large companies found that the pressure to reduce "time to market" harmed nearly every aspect of teamwork: the availability and discoverability of information, clear communication, planning, integration with others' work, and code ownership <rubin16>. Not only did a fast pace reduce quality, but it also reduced engineers' personal satisfaction with their job and their work. I encountered similar issues as CTO of my startup: while racing to market, I was often asked to meet impossible deadlines with zero defects and had to constantly communicate to the other executives in the company why this was not possible <ko17>.
Because of the importance of awareness and communication, the *distance* between teammates is also a critical factor. This is most visible in companies that hire remote developers, building distributed teams, or when teams are fully distributed (such as when there is a pandemic requiring social distancing). One motivation for doing this is to reduce costs or gain access to engineering talent that is distant from a team's geographical center, but over time, companies have found that doing so necessitates significant investments in socialization to ensure quality, minimizing geographical, temporal and cultural separation <smite10>. Researchers have found that there appear to be fundamental tradeoffs between productivity, quality, and/or profits in these settings <ramasubbu11>. For example, more distance appears to lead to slower communication <wagstrom14>. Despite these tradeoffs, most rigorous studies of the cost of distributed development have found that when companies work hard to minimize temporal and cultural separation, the actual impact on defects was small <kocaguneli13>. These efforts to minimize separation include more structured onboarding practices, more structured communication, and more structured processes, as well as systematic efforts to build and maintain trusting social relationships. Some researchers have begun to explore even more extreme models of distributed development, hiring contract developers to complete microtasks over a few days without hiring them as employees; early studies suggest that these models have the worst of outcomes, with greater costs, poor scalability, and more significant quality issues <stol14>.
A critical part of ensuring that a team is successful is having someone responsible for managing these factors of distance, pace, ownership, awareness, and overall process. The most obvious person to oversee this is, of course, a project manager <borozdin17,norris17>. Research on what skills software engineering project managers need suggests that while some technical knowledge is necessary, it is the soft skills necessary for managing all of these factors in communication and coordination that distinguish great managers <kalliamvakou17>.
While all of this research has strong implications for practice, industry has largely explored its own ideas about process, devising frameworks that addressed issues of distance, pace, ownership, awareness, and process improvement. Extreme Programming <beck99> was one of these frameworks and it was full of ideas:
* Be iterative
* Do small releases
* Keep design simple
* Write unit tests
* Refactor to iterate
* Use pair programming
* Integrate continuously
* Everyone owns everything
* Use an open workspace
* Work sane hours
Note that none of these had any empirical evidence to back them. Moreover, Beck described in his original proposal that these ideas were best for "_outsourced or in-house development of small- to medium-sized systems where requirements are vague and likely to change_", but as industry often does, it began hyping it as a universal solution to software project management woes and adopted all kinds of combinations of these ideas, adapting them to their existing processes. In reality, the value of XP appears to depend on highly project-specific factors <müller13>, while the core ideas that industry has adopted are valuing feedback, communication, simplicity, and respect for individuals and the team <sharp04>. Researchers continue to investigate the merits of the list above; for example, numerous studies have investigated the effects of pair programming on defects, finding small but measurable benefits <dibella13>.
At the same time, Beck began also espousing the idea of ["Agile" methods|http://agilemanifesto.org/], which celebrated many of the values underlying Extreme Programming, such as focusing on individuals, keeping things simple, collaborating with customers, and being iterative. This idea of being agile was even more popular and spread widely in industry and research, even though many of the same ideas appeared much earlier in Boehm's work on the Spiral model. Researchers found that Agile methods can increase developer enthusiasm <syedabdullah06>, that agile teams need different roles such as Mentor, Co-ordinator, Translator, Champion, Promoter, and Terminator <hoda10>, and that teams are combining agile methods with all kinds of process ideas from other project management frameworks such as [Scrum|https://en.wikipedia.org/wiki/Scrum_(software_development)] (meet daily to plan work, plan two-week sprints, maintain a backlog of work) and Kanban (visualize the workflow, limit work-in-progress, manage flow, make policies explicit, and implement feedback loops) <albaik15>. Research has also found that transitioning a team to Agile methods is slow and complex because it requires everyone on a team to change their behavior, beliefs, and practices <hoda17>.
Ultimately, all of this energy around process ideas in industry is exciting, but disorganized. None of these efforts really get to the core of what makes software projects difficult to manage. One effort in research gets at this core by contributing new theories that explain these difficulties. The first is Herbsleb's *Socio-Technical Theory of Coordination (STTC)*. The idea of the theory is quite simple: _technical dependencies_ in engineering decisions (e.g., this function calls this other function, this data type stores this other data type) define the _social constraints_ that the organization must solve in a variety of ways to build and maintain software <herbsleb03,herbsleb16>. The better the organization builds processes and awareness tools to ensure that the people who own those engineering dependencies are communicating and aware of each others' work, the fewer defects will occur. Herbsleb referred to this alignment as _sociotechnical congruence_, and conducted a number of studies demonstrating its predictive and explanatory power.
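A minimal sketch of a congruence-style measure, with invented modules, owners, and communication links (a simplification of the measures used in this line of research):

```python
# Technical dependencies between modules, who owns each module, and which
# pairs of people actually communicate. All data here is hypothetical.
dependencies = {("auth", "db"), ("ui", "auth")}   # module -> module
owners = {"auth": "alice", "db": "bob", "ui": "carol"}
communication = {frozenset({"alice", "bob"})}     # who actually talks

def congruence(deps, owners, comm):
    """Fraction of coordination requirements (pairs of people owning
    interdependent modules) that are matched by actual communication."""
    required = {frozenset({owners[a], owners[b]})
                for a, b in deps if owners[a] != owners[b]}
    if not required:
        return 1.0  # no cross-person dependencies, nothing to coordinate
    return len(required & comm) / len(required)

print(congruence(dependencies, owners, communication))  # 0.5
```

Here the `ui`→`auth` dependency requires carol and alice to coordinate, but no such communication link exists, so congruence is only 0.5; the theory predicts that gap is where defects are most likely.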
I extended this idea to congruence with beliefs about _product_ value <ko17>, claiming that successful software products require the constant, collective communication and agreement of a coherent proposition of a product's value across UX, design, engineering, product, marketing, sales, support, and even customers. A team needs to achieve Herbsleb's sociotechnical congruence to have a successful product, but that alone is not enough: the rest of the organization has to have a consistent understanding of what is being built and why, even as that understanding evolves over time.
When we think of productivity, we usually have a vague concept of a rate of work per unit time. Where it gets tricky is in defining "work". On an individual level, work can be easier to define, because developers often have specific concrete tasks that they're assigned. But when they're not, it's not really easy to define progress (well, it's not that easy to define "done" sometimes either, but that's a topic for a later chapter). When you start considering work at the scale of a team or an organization, productivity gets even harder to define, since an individual's productivity might be increased by ignoring every critical request from a teammate, harming the team's overall productivity.
Despite the challenge in defining productivity, there are numerous factors that affect productivity. For example, at the individual level, having the right tools can result in an order of magnitude difference in speed at accomplishing a task. One study I ran found that developers using the Eclipse IDE spent a third of their time just physically navigating between source files <ko05>. With the right navigation aids, developers could be writing code and fixing bugs 30% faster. In fact, some tools like Mylyn automatically bring relevant code to the developer rather than making them navigate to it, greatly increasing the speed with which developers can accomplish a task <kersten06>. Long gone are the days when developers should be using bare command lines and text editors to write code: IDEs can and do greatly increase productivity when used and configured with speed in mind.
Of course, individual productivity is about more than just tools. Studies of workplace productivity show that developers have highly fragmented days, interrupted by meetings, emails, coding, and non-work distractions <meyer17>. These interruptions are often viewed negatively from an individual perspective <northrup16>, but may be highly valuable from a team and organizational perspective. Moreover, productivity is not just about skills to manage time, but also many other skills that shape developer expertise, including skills in designing architectures, debugging, testing, programming languages, etc. <baltes18>. Hiring is therefore about far more than just how quickly and effectively someone can code <bartram16>.
That said, productivity is not just about individual developers. Because communication is a key part of team productivity, an individual's productivity is as much determined by their ability to collaborate and communicate with other developers. In a study spanning dozens of interviews with senior software engineers, Li et al. found that the majority of critical attributes for software engineering skill (productivity included) concerned their interpersonal skills, their communication skills, and their ability to be resourceful within their organization <li15>. Similarly, LaToza et al. found that the primary bottleneck in productivity was communication with teammates, primarily because waiting for replies was slower than just looking something up <latoza06>. Of course, looking something up has its own problems. While StackOverflow is an incredible resource for missing documentation <mamykina11>, it also is full of all kinds of misleading and incorrect information contributed by developers without sufficient expertise to answer questions <barua14>. Finally, because communication is such a critical part of retrieving information, adding more developers to a team has surprising effects. One study found that adding people to a team slowly enough to allow them to onboard effectively could reduce defects, but adding them too fast led to increases in defects <meneely11>.
Another dimension of productivity is learning. Great engineers are resourceful, quick learners <li15>. New engineers must be even more resourceful, even though their instincts are often to hide their lack of expertise from exactly the people they need help from <begel08>. Experienced developers know that learning is important and now rely heavily on social media such as Twitter to follow industry changes, build learning relationships, and discover new concepts and platforms to learn <singer14>. And, of course, developers now rely heavily on web search to fill in inevitable gaps in their knowledge about APIs, error messages, and myriad other details about languages and platforms <xia17>.
Unfortunately, learning is no easy task. One of my earliest studies as a researcher investigated the barriers to learning new programming languages and systems, finding six distinct types of content that are challenging <ko04>. To use a programming platform successfully, people need to overcome _design_ barriers, which are the abstract computational problems that must be solved, independent of the languages and APIs. People need to overcome _selection_ barriers, which involve finding the right abstractions or APIs to achieve the design they have identified. People need to overcome _use_ and _coordination_ barriers, which involve operating and coordinating different parts of a language or API together to achieve novel functionality. People need to overcome _comprehension_ barriers, which involve knowing what can go wrong when using part of a language or API. And finally, people need to overcome _information_ barriers, which are posed by the limited ability of tools to inspect a program's behavior at runtime during debugging. Every single one of these barriers has its own challenges, and developers encounter them every time they are learning a new platform, regardless of how much expertise they have.
Aside from individual and team factors, productivity is also influenced by the particular features of a project's code, how the project is managed, or the environment and organizational culture in which developers work <vosburgh84,demarco85>. In fact, these might actually be the _biggest_ factors in determining developer productivity. This means that even a developer that is highly productive individually cannot rescue a team that is poorly structured working on poorly architected code. This might be why highly productive developers are so difficult to recruit to poorly managed teams.
A different way to think about productivity is to consider it from a "waste" perspective, in which waste is defined as any activity that does not contribute to a product's value to users or customers. Sedano et al. investigated this view across two years and eight software development projects in a software development consultancy <sedano17>, contributing a taxonomy of waste:
* *Building the wrong feature or product*. The cost of building a feature or product that does not address user or business needs.
* *Mismanaging the backlog*. The cost of duplicating work, expediting lower value user features, or delaying necessary bug fixes.
* *Rework*. The cost of altering delivered work that should have been done correctly but was not.
* *Unnecessarily complex solutions*. The cost of creating a more complicated solution than necessary, a missed opportunity to simplify features, user interface, or code.
* *Extraneous cognitive load*. The costs of unneeded expenditure of mental energy, such as poorly written code, context switching, confusing APIs, or technical debt.
* *Psychological distress*. The costs of burdening the team with unhelpful stress arising from low morale, pace, or interpersonal conflict.
* *Waiting/multitasking*. The cost of idle time, often hidden by multi-tasking, due to slow tests, missing information, or context switching.
* *Knowledge loss*. The cost of re-acquiring information that the team once knew.
* *Ineffective communication*. The cost of incomplete, incorrect, misleading, inefficient, or absent communication.
One could imagine using these concepts to refine processes and practices in a team, helping both developers and managers be more aware of sources of waste that harm productivity.
Of course, productivity is not only shaped by professional and organizational factors, but personal ones as well. Consider, for example, an engineer who has friends, wealth, health care, health, stable housing, sufficient pay, and safety: they likely have everything they need to bring their full attention to their work. In contrast, imagine an engineer who is isolated, has immense debt, has no health care, has a chronic disease like diabetes, is being displaced from an apartment by gentrification, has lower pay than their peers, or does not feel safe in public. Any one of these factors might limit an engineer's ability to be productive at work; some people might experience multiple, or even all of these factors, especially if they are a person of color in the United States, who has faced a lifetime of racist inequities in school, health care, and housing. Because of the potential for such inequities to influence someone's ability to work, managers and organizations need to make space for surfacing these inequities at work, so that teams can acknowledge them, plan around them, and ideally address them through targeted supports. Anything less tends to make engineers feel unsupported, which will only decrease their motivation to contribute to a team. These widely varying conceptions of productivity reveal that programming in a software engineering context is about far more than just writing a lot of code. It's about coordinating productively with a team, synchronizing your work with an organization's goals, and most importantly, reflecting on ways to change work to achieve those goals more effectively.

chapters/quality.md Normal file
There are numerous ways a software project can fail: projects can be over budget, they can ship late, they can fail to be useful, or they can simply not be useful enough. Evidence clearly shows that success is highly contextual and stakeholder-dependent: success might be financial, social, physical, and even emotional, suggesting that software engineering success is a multifaceted variable that cannot be explained simply by user satisfaction, profitability, or meeting requirements, budgets, and schedules <ralph14>.
One of the central reasons for this is that there are many distinct *software qualities* that software can have, and depending on the stakeholders, each of these qualities might have more or less importance. For example, a safety critical system such as flight automation software should be reliable and defect-free, but it's okay if it's not particularly learnable&mdash;that's what training is for. A video game, however, should probably be fun and learnable, but it's fine if it ships with a few defects, as long as they don't interfere with fun <murphy14>.
There are a surprisingly large number of software qualities <boehm76>. Many concern properties that are intrinsic to a software's implementation:
* *Correctness* is the extent to which a program behaves according to its specification. If specifications are ambiguous, correctness is ambiguous. However, even if a specification is perfectly unambiguous, it might still fail to meet other qualities (e.g., a web site may be built as intended, but still be slow, unusable, and useless.)
* *Reliability* is the extent to which a program behaves the same way over time in the same operating environment. For example, if your online banking app works most of the time, but crashes sometimes, it's not particularly reliable.
* *Robustness* is the extent to which a program can recover from errors or unexpected input. For example, a login form that crashes if an email is formatted improperly isn't very robust. A login form that handles _any_ text input is optimally robust. One can make a system more robust by increasing the breadth of errors and inputs it can handle in a reasonable way.
* *Performance* is the extent to which a program uses computing resources economically. Synonymous with "fast" and "zippy". Performance is directly determined by how many instructions a program has to execute to accomplish its operations, but it is difficult to measure because operations, inputs, and the operating environment can vary widely.
* *Portability* is the extent to which an implementation can run on different platforms without being modified. For example, "universal" applications in the Apple ecosystem that can run on iPhones, iPads, and Mac OS without being modified or recompiled are highly portable.
* *Interoperability* is the extent to which a system can seamlessly interact with other systems, typically through the use of standards. For example, some software systems use entirely proprietary and secret data formats and communication protocols. These are less interoperable than systems that use industry-wide standards.
* *Security* is the extent to which only authorized individuals can access a software system's data and computation.
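To make the robustness quality above concrete, consider a sketch of two versions of a hypothetical input handler (the `parseAge` functions here are invented for illustration, not part of any system discussed above):

```javascript
// A brittle handler: assumes input is always a well-formed numeric string.
// Unexpected input silently produces NaN, which then spreads through the program.
function parseAgeBrittle(text) {
  return parseInt(text, 10);
}

// A more robust handler: accepts any input and maps anything unusable
// to an explicit null, which callers can check for and recover from.
function parseAgeRobust(text) {
  if (typeof text !== "string") return null;
  const age = parseInt(text.trim(), 10);
  return Number.isInteger(age) && age >= 0 ? age : null;
}
```

The robust version handles a broader range of inputs in a reasonable way, at the cost of more code to write and maintain&mdash;one of the tradeoffs between qualities.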
Whereas the above qualities are concerned with how software behaves technically according to specifications, some qualities concern properties of how developers interact with code:
* *Verifiability* is the effort required to verify that software does what it is intended to do. For example, it is hard to verify a safety critical system without either proving it correct or testing it in a safety-critical context (which isn't safe). Take driverless cars, for example: for Google to test their software, they've had to set up thousands of paid drivers to monitor and report problems on the road. In contrast, verifying that a simple static HTML web page works correctly is as simple as opening it in a browser.
* *Maintainability* is the effort required to correct, adapt, or perfect software. This depends mostly on how comprehensible and modular an implementation is.
* *Reusability* is the effort required to use a program's components for purposes other than those for which it was originally designed. APIs are reusable by definition, whereas black box embedded software (like the software built into a car's traction systems) is not.
Other qualities are concerned with the use of the software in the world by people:
* *Learnability* is the ease with which a person can learn to operate software. Learnability is multi-dimensional and can be difficult to measure, including aspects of usability, expectations of prior knowledge, reliance on conventions, error proneness, and task alignment <grossman09>.
* *User efficiency* is the speed with which a person can perform tasks with a program. For example, think about the speed with which you can navigate back to the table of contents of this book. Obviously, because most software supports many tasks, user efficiency isn't a single property of software, but one that varies depending on the task.
* *Accessibility* is the extent to which people with varying cognitive and motor abilities can operate the software as intended. For example, software that can only be used with a mouse is less accessible than something that can be used with a mouse, keyboard, or speech recognition. Software can be designed for all abilities, and even automatically adapted for individual abilities <wobbrock11>.
* *Privacy* is the extent to which a system prevents access to information that is intended for a particular audience or use. To achieve privacy, a system must be secure; for example, if anyone could log into your Facebook account, it would be insecure, and thus have poor privacy preservation. However, a secure system is not necessarily private: Facebook works hard on security, but shares immense amounts of private data with third parties, often without informed consent.
* *Consistency* is the extent to which related functionality in a system leverages the same skills, rather than requiring new skills to learn how to use. For example, in Mac OS, quitting any application requires the same action: command-Q or the Quit menu item in the application menu; this is highly consistent. Other platforms that are less consistent allow applications to have many different ways of quitting applications.
* *Usability* is an aggregate quality that encompasses all of the qualities above. It is used holistically to refer to all of those factors. Because it is not very precise, it is mostly useful in casual conversation about software, but not as useful in technical conversations about software quality.
* *Bias* is the extent to which software discriminates or excludes on the basis of some aspect of its user, either directly, or by amplifying or reinforcing discriminatory or exclusionary structures in society. For example, a classifier might be trained on racially biased data, algorithms might use sexist assumptions about gender, web forms might systematically exclude non-Western names and languages, and applications might be accessible only to people who can see or use a mouse. Inaccessibility is a form of bias.
* *Usefulness* is the extent to which software is of value to its various stakeholders. Usefulness is often the _most_ important quality because it subsumes all of the other lower-level qualities software can have (e.g., part of what makes a messaging app useful is that it's performant, user efficient, and reliable). That also makes it less useful as a concept, because it encompasses so many things. That said, usefulness is not always the most important quality. For example, if you can sell a product to a customer and receive a one-time payment, it might not matter--at least to a for-profit venture--that the product has low usefulness.
Although the lists above are not complete, you might have already noticed some tradeoffs between different qualities. A secure system is necessarily going to be less learnable, because there will be more to learn to operate it. A robust system will likely be less maintainable, because it will likely have more code to account for its diverse operating environments. Because one cannot achieve all software qualities, and achieving each quality takes significant time, it is necessary to prioritize qualities for each project.
These external notions of quality are not the only qualities that matter. For example, developers often view projects as successful if they offer intrinsically rewarding work <procaccino05>. That may sound selfish, but if developers _aren't_ enjoying their work, they're probably not going to achieve any of the qualities very well. Moreover, there are many organizational factors that can inhibit developers' ability to obtain these rewards. Project complexity, internal and external dependencies that are out of a developer's control, process barriers, budget limitations, deadlines, poor HR planning, and pressure to ship can all interfere with project success <lavallee15>.
As I've noted before, the person most responsible for isolating developers from these organizational problems, and most responsible for prioritizing software qualities is a product manager. Check out the podcast below for one product manager's perspectives on the challenges of balancing these different priorities.

chapters/requirements.md Normal file
Once you have a problem, a solution, and a design specification, it's entirely reasonable to start thinking about code. What libraries should we use? What platform is best? Who will build what? After all, there's no better way to test the feasibility of an idea than to build it, deploy it, and find out if it works. Right?
It depends. This mentality towards product design works fine if building and deploying something is cheap and getting feedback has no consequences. Simple consumer applications often benefit from this simplicity, especially early stage ones, because there's little to lose. For example, if you are starting a company, and do not even know if there is a market opportunity yet, it may be worth quickly prototyping an idea, seeing if there's interest, and then later thinking about how to carefully architect a product that meets that opportunity. This is [how products such as Facebook started|https://en.wikipedia.org/wiki/History_of_Facebook], with a poorly implemented prototype that revealed an opportunity, which was only later translated into a functional, reliable software service.
However, what if prototyping a beta _isn't_ cheap to build? What if your product only has one shot at adoption? What if you're building something for a client and they want to define success? Worse yet, what if your product could _kill_ people if it's not built properly? Consider the [U.S. HealthCare.gov launch|https://en.wikipedia.org/wiki/HealthCare.gov], for example, which was lambasted for its countless defects and poor scalability at launch, only working for 1,100 simultaneous users, when 50,000 were expected and 250,000 actually arrived. To prevent disastrous launches like this, software teams have to be more careful about translating a design specification into a specific, explicit set of goals that must be satisfied in order for the implementation to be complete. We call these goals *requirements* and we call this process *requirements engineering* <sommerville97>.
In principle, requirements are a relatively simple concept. They are simply statements of what must be true about a system to make the system acceptable. For example, suppose you were designing an interactive mobile game. You might want to write the requirement _The frame rate must never drop below 60 frames per second._ This could be important for any number of reasons: the game may rely on interactive speeds, your company's reputation may be for high fidelity graphics, or perhaps that high frame rate is key to creating a sense of realism. Or, imagine your game company has built its reputation on high performance, high frame rate graphics, and achieving any less would erode your company's brand. Whatever the reasons, expressing it as a requirement makes it explicit that any version of the software that doesn't meet that requirement is unacceptable, and sets a clear goal for engineering to meet.
The general idea of writing down requirements is actually a controversial one. Why not just discover what a system needs to do incrementally, through testing, user feedback, and other methods? Some of the original arguments for writing down requirements actually acknowledged that software is necessarily built incrementally, but that it is nevertheless useful to write down requirements from the outset <parnas86>. This is because requirements help you plan everything: what you have to build, what you have to test, and how to know when you're done. The theory is that by defining requirements explicitly, you plan, and by planning, you save time.
Do you really have to plan by _writing down_ requirements? For example, why not do what designers do, expressing requirements in the form of prototypes and mockups? These _implicitly_ state requirements, because they suggest what the software is supposed to do without saying it directly. But for some types of requirements, they actually imply nothing. For example, how responsive should a web page be? A prototype doesn't really say; an explicit requirement of _an average page load time of less than 1 second_ leaves no ambiguity. Requirements can therefore be thought of more like an architect's blueprint: they provide explicit definitions and scaffolding of project success.
And yet, like design, requirements come from the world and the people in it and not from software <jackson01>. Because they come from the world, requirements are rarely objective or unambiguous. For example, some requirements come from law, such as the European Union's [General Data Protection Regulation (GDPR)|https://eugdpr.org/], which specifies a set of data privacy requirements that all software systems used by EU citizens must meet. Other requirements might come from public pressure for change, as in Twitter's decision to label particular tweets as having false information or hate speech. Therefore, the methods that people use to do requirements engineering are quite diverse. Requirements engineers may work with lawyers to interpret policy. They might work with regulators to negotiate requirements. They might also use design methods, such as user research methods and rapid prototyping, to iteratively converge toward requirements <lamsweerde08>. Ultimately, the big difference between design and requirements engineering is that requirements engineers take the process one step further than designers, enumerating _in detail_ every property that the software must satisfy, and engaging with every source of requirements a system might need to meet, not just user needs.
There are some approaches to specifying requirements _formally_. These techniques allow requirements engineers to automatically identify _conflicting_ requirements, so they don't end up proposing a design that can't possibly exist. Some even use systems to make requirements _traceable_, meaning the high level requirement can be linked directly to the code that meets that requirement <mader15>. All of this formality has tradeoffs: not only does it take more time to be so precise, but it can negatively affect creativity in concept generation as well <mohanani14>.
Expressing requirements in natural language can mitigate these effects, at the expense of precision. They just have to be *complete*, *precise*, *non-conflicting*, and *verifiable*. For example, consider a design for a simple to do list application. Its requirements might be something like the following:
* Users must be able to add to do list items with a single action.
* To do list items must consist of text and a binary completed state.
* Users must be able to edit to do list item text.
* Users must be able to toggle the completed state.
* Users must be able to delete to do list items.
* All edits to to do list item state must save without user intervention.
Let's review these requirements against the criteria for good requirements that I listed above:
* Is it *complete*? I can think of a few more requirements: is the list ordered? How long does state persist? Are there user accounts? Where is data stored? What does it look like? What kinds of user actions must be supported? Is delete undoable? Even just on this completeness dimension, you can see how even a very simple application can become quite complex. When you're generating requirements, your job is to make sure you haven't forgotten important requirements.
* Is the list *precise*? Not really. When you add a to do list item, is it added at the beginning? The end? Wherever a user requests it be added? How long can the to do list item text be? Clearly the requirement above is imprecise. And imprecise requirements lead to imprecise goals, which means that engineers might not meet them. Is this to do list team okay with not meeting its goals?
* Are the requirements *non-conflicting*? I _think_ they are, since they all seem to be satisfiable together. But some of the missing requirements might conflict. For example, suppose we clarified the imprecise requirement about where a to do list item is added. If the requirement was that it was added to the end, is there also a requirement that the window scroll to make the newly added to do item visible? If not, would the first requirement of making it possible for users to add an item with a single action be achievable? They could add it, but they wouldn't know they had added it because of this usability problem, so is this requirement met? This example shows that reasoning through requirements is ultimately about interpreting words, finding sources of ambiguity, and trying to eliminate them with more words.
* Finally, are they *verifiable*? Some more than others. For example, is there a way to guarantee that the state saves successfully all the time? That may be difficult to prove given the vast number of ways the operating environment might prevent saving, such as a failing hard drive or an interrupted internet connection. This requirement might need to be revised to allow for failures to save, which itself might have implications for other requirements in the list.
Now, the flaws above don't make the requirements "wrong". They just make them "less good." The more complete, precise, non-conflicting, and testable your requirements are, the easier it is to anticipate risk, estimate work, and evaluate progress, since requirements essentially give you a to do list for implementation and testing.
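One way to see how requirements become a to do list for implementation and testing is to sketch them as code whose behavior can be checked directly. The `TodoList` class below is hypothetical, invented here to illustrate that mapping; it is not an implementation the chapter prescribes:

```javascript
// A hypothetical sketch mapping the to do list requirements above onto code.
// Each method corresponds to one requirement, making it verifiable.
class TodoList {
  constructor() {
    this.items = [];
  }
  // Requirement: add an item with a single action; items consist of
  // text and a binary completed state.
  add(text) {
    const item = { text: text, completed: false };
    this.items.push(item);
    return item;
  }
  // Requirement: users must be able to edit item text.
  edit(item, newText) {
    item.text = newText;
  }
  // Requirement: users must be able to toggle the completed state.
  toggle(item) {
    item.completed = !item.completed;
  }
  // Requirement: users must be able to delete items.
  delete(item) {
    this.items = this.items.filter(i => i !== item);
  }
}
```

Notice that even this sketch forces decisions the requirements left imprecise, such as where a new item is added (here, the end of the list).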
Lastly, remember that requirements are translated from a design, and designs have many more qualities than just completeness, preciseness, feasibility, and verifiability. Designs must also be legal, ethical, and just. Consider, for example, the anti-Black redlining practices pervasive throughout the United States. Even through the 1980's, it was standard practice for banks to lend to lower-income white residents, but not Black residents, even middle-income or upper-income ones. Banks in the 1980's wrote software to automate many lending decisions; would a software requirement such as this have been legal, ethical, or just?
"
No loan application with an applicant self-identified as Black should be approved.
"
That requirement is both precise and verifiable. In the 1980's, it was legal. But was it ethical or just? No. Therefore, no requirement, no matter how formally extracted from a design specification, no matter how consistent with law, and no matter how aligned with an organization's priorities, is guaranteed to be free from racist ideas. Requirements are just one of many ways that such ideas are manifested, and ultimately hidden in code <benjamin19>.

When you make something with code, you're probably used to figuring out a design as you go. You write a function, you choose some arguments, and if you don't like what you see, perhaps you add a new argument to that function and test again. This [cowboy coding|https://en.wikipedia.org/wiki/Cowboy_coding], as some people like to call it, can be great fun! It allows systems to emerge more organically: as you iteratively see your front-end design emerge, the design of your implementation emerges too, co-evolving with how you're feeling about the final product.
As you've probably noticed by now, this type of process doesn't really scale, even when you're working with just a few other people. That argument you added? You just broke a bunch of functions one of your teammates was planning and when she commits her code, now she gets merge conflicts, which cost her an hour to fix because she has to catch up to whatever design change you made. This lack of planning quickly turns into an uncoordinated mess of individual decision making. Suddenly you're spending all of your time cleaning up coordination messes instead of writing code.
The techniques we've discussed so far for avoiding this boil down to _specifying_ what code should do, so everyone can write code according to a plan. We've talked about [requirements specifications|requirements.html], which are declarations of what software must do from a user's perspective. We've also talked about [architectural specifications|architecture.html], which are high-level declarations of how code will be organized, encapsulated, and coordinated. At the lowest level are *functional specifications*, which are declarations about the _properties of input and output of functions in a program_.
In its simplest form, a functional specification can be just some natural language that says what an individual function is supposed to do:
`
// Return the smaller of the two numbers,
// or if they're equal, the second number.
function min(a, b) {
return a < b ? a : b;
}
`
This comment achieves the core purpose of a specification: to help other developers understand what the requirements and intended behavior of a function are. As long as everyone sticks to this "plan" (everyone calls the function with only numbers and the function always returns the smaller of them), then there shouldn't be any problems.
The comment above is okay, but it's not very precise. It says what is returned and what properties it has, but it only implies that numbers are allowed, without saying anything about what kind of numbers. Are decimals allowed or just integers? What about not-a-number (the result of dividing zero by zero)? Or infinity?
To make these clearer, many languages use *static typing* to allow developers to specify types explicitly:
`
// Return the smaller of the two integers, or if they're equal, the second number.
function min(int a, int b) {
return a < b ? a : b;
}
`
Because an `int` is well-defined in most languages, the two inputs to the function are well-defined.
Of course, if the above were JavaScript code (which doesn't support static typing), JavaScript would do nothing to actually verify that the data given to `min()` are actually integers. It's entirely fine with someone sending a string and an object. This probably won't do what you intended, leading to defects.
This brings us to a second purpose of writing functional specifications: to help _verify_ that functions, their input, and their output are correct. Tests of functions and other low-level procedures are called *unit tests*. There are many ways to use specifications to verify correctness. By far, one of the simplest and most widely used kinds of unit tests are *assertions* <clarke06>. Assertions consist of two things: 1) a check on some property of a function's input or output and 2) some action to notify about violations of these properties. For example, if we wanted to verify that the JavaScript function above had integer values as inputs, we would do this:
`
// Return the smaller of the two numbers, or if they're equal, the second number.
function min(a, b) {
if(!Number.isInteger(a))
alert("First input to min() isn't an integer!");
if(!Number.isInteger(b))
alert("Second input to min() isn't an integer!");
return a < b ? a : b;
}
`
These two new lines of code are essentially functional specifications that declare "_If either of those inputs is not an integer, the caller of this function is doing something wrong_". This is useful to declare, but assertions have a bunch of problems: if your program _can_ send a non-integer value to min, but you never test it in a way that does, you'll never see those alerts. This form of *dynamic verification* is therefore very limited, since it provides weaker guarantees about correctness. That said, a study of the use of assertions in a large database of GitHub projects shows that use of assertions _is_ related to fewer defects <casalnuovo15-2> (though note that I said "related": we have no evidence that assertions actually prevent defects. It may be possible that developers who use assertions are just better at avoiding defects.)
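Assertions like the ones above only fire on executions that actually reach them; an automated unit test forces those executions. Here is a minimal hand-rolled sketch, assuming no particular test framework:

```javascript
// Return the smaller of the two numbers, or if they're equal, the second number.
function min(a, b) {
  return a < b ? a : b;
}

// A minimal unit test: run the function on representative inputs and
// compare against the specified output. It only verifies the inputs we try.
function testMin() {
  const cases = [
    { a: 1, b: 2, expected: 1 }, // first is smaller
    { a: 5, b: 3, expected: 3 }, // second is smaller
    { a: 4, b: 4, expected: 4 }, // equal: spec says return the second
  ];
  for (const c of cases) {
    const actual = min(c.a, c.b);
    if (actual !== c.expected) {
      throw new Error("min(" + c.a + ", " + c.b + ") returned " + actual + ", expected " + c.expected);
    }
  }
  return true;
}
```

Even this test shares the weakness of dynamic verification: it says nothing about the infinitely many inputs it never tries.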
Assertions are related to the broader category of *error handling* language features. Error handling includes assertions, but also programming language features like exceptions and exception handlers. Error handling is a form of specification in that _checking_ for errors usually entails explicitly specifying the conditions that determine an error. For example, in the code above, the condition `Number.isInteger(a)` specifies that the parameter `a` must be an integer. Other exception handling code such as the Java `throws` statement indicates the cases in which errors can occur, and the corresponding `catch` statement indicates what is to be done about errors. It is difficult to implement good exception handling that provides granular, clear ways of recovering from errors <chen09>. Evidence shows that modern developers are still exceptionally bad at designing for errors; one study found that errors are not designed for, few errors are tested for, and exception handling is often overly general, providing little ability for users to understand errors or for developers to debug them <ebert15>. These difficulties appear to be because it is difficult to imagine the vast range of errors that can occur <maxion00>.
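As a sketch of how exceptions encode these specifications in code (the `minOrDefault` wrapper here is invented for illustration):

```javascript
// Throwing specifies the conditions under which inputs are invalid.
function min(a, b) {
  if (!Number.isInteger(a)) throw new TypeError("min(): first input must be an integer");
  if (!Number.isInteger(b)) throw new TypeError("min(): second input must be an integer");
  return a < b ? a : b;
}

// The catch block specifies what is to be done about the error:
// here, recovering with a caller-provided fallback value.
function minOrDefault(a, b, fallback) {
  try {
    return min(a, b);
  } catch (e) {
    return fallback;
  }
}
```

Note how easy it is for handling to be overly general: this `catch` silently swallows _any_ error, which is exactly the kind of coarse handling the studies above describe.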
Researchers have invented many forms of specification that require more work and more thought to write, but can be used to make stronger guarantees about correctness <woodcock09>. For example, many languages support the expression of formal *pre-conditions* and *post-conditions* that represent contracts that must be kept for the program to be correct. (*Formal* means mathematical, facilitating mathematical proofs that these conditions are met.) Because these contracts are essentially mathematical promises, we can build tools that automatically read a function's code and verify that what it computes exhibits those mathematical properties using automated theorem proving systems. For example, suppose we wrote some formal specifications for our example above to replace our assertions (using a fictional notation for illustration purposes):
`
// pre-conditions: a in Integers, b in Integers
// post-conditions: result <= a and result <= b
function min(a, b) {
return a < b ? a : b;
}
`
The annotations above require that, no matter what, the inputs have to be integers and the output has to be less than or equal to both values. The automatic theorem prover can then start with the claim that result is always less than or equal to both and begin searching for a counterexample. Can you find a counterexample? Really try. Think about what you're doing while you try: you're probably experimenting with different inputs to identify arguments that violate the contract. That's similar to what automatic theorem provers do, but they use many tricks to explore large possible spaces of inputs all at once, and they do it very quickly.
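Automated theorem provers search for counterexamples symbolically, but a crude approximation you can actually run is to sample the input space randomly and check the post-condition directly (this sketch is only illustrative; it is not how provers work internally):

```javascript
// The function under verification.
function min(a, b) {
  return a < b ? a : b;
}

// Randomly probe the input space, checking the post-condition
// result <= a and result <= b for each sample. Returns the first
// violating input found, or null if none is found in this sample.
function findCounterexample(trials) {
  for (let i = 0; i < trials; i++) {
    const a = Math.floor(Math.random() * 2001) - 1000; // random integer in [-1000, 1000]
    const b = Math.floor(Math.random() * 2001) - 1000;
    const result = min(a, b);
    if (!(result <= a && result <= b)) {
      return { a: a, b: b, result: result }; // post-condition violated
    }
  }
  return null;
}
```

Because `min` always returns one of its two arguments, and always the smaller (or an equal one), no sample can violate the post-condition; a prover would establish this for _all_ inputs, not just the sampled ones.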
There are definite tradeoffs with writing detailed, formal specifications. The benefits are clear: many companies have written formal functional specifications in order to make _completely_ unambiguous the required behavior of their code, particularly systems that are capable of killing people or losing money, such as flight automation software, banking systems, and even compilers that create executables from code <woodcock09>. In these settings, it's worth the effort of being 100% certain that the program is correct because if it's not, people can die. Specifications can have other benefits. The very act of writing down what you expect a function to do in the form of test cases can slow developers down, causing them to reflect more carefully and systematically about exactly what they expect a function to do <fucci16>. Perhaps if this is true in general, there's value in simply stepping back before you write a function, mapping out pre-conditions and post-conditions in the form of simple natural language comments, and _then_ writing the function to match your intentions.
Writing formal specifications can also have downsides. When the consequences of software failure aren't so high, the difficulty and time required to write and maintain functional specifications may not be worth the effort <petre13>. These barriers deter many developers from writing them <schiller14>. Formal specifications can also warp the types of data that developers work with. For example, it is much easier to write formal specifications about Boolean values and integers than string values. This can lead engineers to be overly reductive in how they model data (e.g., settling for binary models of gender, when gender is inherently non-binary and multidimensional).

chapters/verification.md Normal file

@@ -0,0 +1,63 @@
How do you know a program does what you intended?
Part of this is being clear about what you intended (by writing [specifications|specifications], for example), but your intents, however clear, are not enough: you need evidence that your intents were correctly expressed computationally. To get this evidence, we do *verification*.
There are many ways to verify code. A reasonable first instinct is to simply run your program. After all, what better way to check whether you expressed your intents than to see with your own eyes what your program does? This empirical approach is called *testing*. Some testing is _manual_, in that a human executes a program and verifies that it does what was intended. Some testing is _automated_, in that the test is run automatically by a computer. Another way to verify code is to *analyze* it, using logic to verify its correct operation. As with testing, some analysis is _manual_, since humans do it. We call this manual analysis _inspection_, whereas other analysis is _automated_, since computers do it. We call this _program analysis_. This leads to a nice complementary set of verification techniques along two axes: degree of automation and type of verification:
* Manual techniques include *manual testing* (which is empirical) and *inspections* (which is analytical)
* Automated techniques include *automated testing* (which is empirical) and *program analysis* (which is analytical)
To discuss each of these and their tradeoffs, first we have to cover some theory about verification, starting with some basic terminology:
* A *defect* is some subset of a program's code that exhibits behavior that violates a program's specifications. For example, if a program was supposed to sort a list of numbers in increasing order and print it to a console, but a flipped inequality in the sorting algorithm made it sort them in decreasing order, the flipped inequality is the defect.
* A *failure* is the program behavior that results from a defect executing. In our sorting example, the failure is the incorrectly sorted list printed on the console.
* A *bug* vaguely refers to either the defect, the failure, or both. When we say "bug", we're not being very precise, but it is a popular shorthand for a defect and everything it causes.
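The chapter's sorting example can be made concrete in a few lines of JavaScript (a hypothetical sketch, not code from any real system). The flipped inequality on the marked line is the *defect*; the wrongly ordered output it produces is the *failure*:

```javascript
// Intended specification: sort a list of numbers in increasing order.
function sortAscending(numbers) {
  const copy = numbers.slice();
  // A simple bubble sort.
  for (let i = 0; i < copy.length; i++) {
    for (let j = 0; j < copy.length - i - 1; j++) {
      if (copy[j] < copy[j + 1]) { // DEFECT: should be >
        const temp = copy[j];
        copy[j] = copy[j + 1];
        copy[j + 1] = temp;
      }
    }
  }
  return copy;
}

// The FAILURE is the behavior the defect causes when executed:
// sortAscending([3, 1, 2]) produces [3, 2, 1], not [1, 2, 3].
```

A "bug report" about this code might describe either the flipped `<` (the defect) or the backwards list (the failure), which is exactly why "bug" is an imprecise word.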
Note that because defects are defined relative to _intent_, whether a behavior is a failure depends entirely on the definition of intent. If that intent is vague, whether something is a defect is vague. Moreover, you can define intents that result in behaviors that seem like failures: for example, I can write a program that intentionally crashes. A crash isn't a failure if it was intended! This might be pedantic, but you'd be surprised how many times I've seen professional developers in bug triage meetings say:
_"Well, it's worked this way for a long time, and people have built up a lot of workarounds for this bug. It's also really hard to fix. Let's just call this by design. Closing this bug as won't fix."_
# Testing
So how do you _find_ defects in a program? Let's start with testing. Testing is generally the easiest kind of verification to do, but as a practice, it has questionable efficacy. Empirical studies of testing find that it _is_ related to fewer defects in the future, but not strongly related, and it's entirely possible that it's not the testing itself that results in fewer defects, but that other activities (such as more careful implementation) result in both fewer defects and more testing effort <ahmed16>. At the same time, modern developers don't test as much as they think they do <beller15>. Moreover, students are often not convinced of the return on investment of automated tests and often opt for laborious manual tests (even though they regret it later) <pham14>. Testing is therefore in a strange place: it's a widespread activity in industry, but it's often not executed systematically, and there is some evidence that it doesn't seem to help prevent defects from being released.
Why is this? One possibility is that *no amount of testing can prove a program correct with respect to its specifications*. Why? It boils down to the same limitations that exist in science: with empiricism, we can provide evidence that a program _does_ have defects, but we can't provide complete evidence that a program _doesn't_ have defects. This is because even simple programs can execute in an infinite number of different ways.
Consider this JavaScript program:
`
function count(input) {
while(input > 0)
input--;
return input;
}
`
The function should always return 0, right? How many possible values of `input` do we have to try manually to verify that it always does? Well, if `input` is an integer, then there are 2 to the power 32 possible integer values, because JavaScript uses 32-bits to represent an integer. That's not infinite, but that's a lot. But what if `input` is a string? There are an infinite number of possible strings because they can have any sequence of characters of any length. Now we have to manually test an infinite number of possible inputs. So if we were restricting ourselves to testing, we would never know that the program is correct for all possible inputs. In this case, automatic testing doesn't even help, since there are an infinite number of tests to run.
There are some ideas in testing that can improve how well we can find defects. For example, rather than just testing the inputs you can think of, focus on all of the lines of code in your program. If you find a set of tests that can cause all of the lines of code to execute, you have one notion of *test coverage*. Of course, lines of code aren't enough, because an individual line can contain multiple different paths in it (e.g., `value ? getResult1() : getResult2()`). So another notion of coverage is executing all of the possible _control flow paths_ through the various conditionals in your program. Executing _all_ of the possible paths is hard, of course, because every conditional in your program doubles the number of possible paths (you have 200 if statements in your program? That's up to 2 to the power 200 possible paths, which is more paths than there are [atoms in the universe|https://en.wikipedia.org/wiki/Observable_universe#Matter_content]).
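The gap between line coverage and path coverage shows up even in tiny functions. Here is a hypothetical JavaScript example in which two tests execute every line, yet cover only half of the possible control flow paths:

```javascript
// A tiny function with two independent conditionals,
// so there are 2 x 2 = 4 control flow paths through it.
function classify(n) {
  const sign = n < 0 ? "negative" : "non-negative";
  const parity = n % 2 === 0 ? "even" : "odd";
  return sign + " " + parity;
}

// These two tests achieve full *line* coverage:
classify(-3); // "negative odd"
classify(4);  // "non-negative even"

// ...but full *path* coverage needs the other two combinations too:
classify(-2); // "negative even"
classify(3);  // "non-negative odd"
```

With 200 independent conditionals instead of 2, the same arithmetic gives the 2 to the power 200 paths mentioned above.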
There are many types of testing that are common in software engineering:
* *Unit tests* verify that functions return the correct output. For example, a program that implemented a function for finding the day of the week for a given date might also include unit tests that verify for a large number of dates that the correct day of the week is returned. They're good for ensuring widely used low-level functionality is correct.
* *Integration tests* verify that when all of the functionality of a program is put together into the final product, it behaves according to specifications. Integration tests often operate at the level of user interfaces, clicking buttons, entering text, submitting forms, and verifying that the expected feedback always occurs. Integration tests are good for ensuring that important tasks that users will perform are correct.
* *Regression tests* verify that behavior that previously worked doesn't stop working. For example, imagine you find a defect that causes logins to fail; you might write a test that verifies that this cause of login failure does not occur, in case someone breaks the same functionality again, even for a different reason. Regression tests are good for ensuring that you don't break things when you make changes to your application.
Which tests you should write depends on what risks you want to take. Don't care about failures? Don't write any tests. If failures of a particular kind are highly consequential to your team, you should probably write tests that check for those failures. As we noted above, you can't write enough tests to catch all bugs, so deciding which tests to write and maintain is a key challenge.
# Analysis
Now, you might be thinking that it's obvious that the program above is defective for some integers and strings. How did you know? You _analyzed_ the program rather than executing it with specific inputs. For example, when I read (analyzed) the program, I thought:
_"if we assume `input` is an integer, then there are only three types of values to meaningfully consider with respect to the `>` in the loop condition: positive, zero, and negative. Positive numbers will always decrement to 0 and return 0. Zero will return zero. And negative numbers just get returned as is, since they're less than zero, which is wrong with respect to the specification. And in JavaScript, strings are never greater than 0 (let's not worry about whether it even makes sense to be able to compare strings and numbers), so the string is returned, which is wrong."_
The above is basically an informal proof. I used logic to divide the possible states of `input` and their effect on the program's behavior. I used *symbolic execution* to verify all possible paths through the function, finding the paths that result in correct and incorrect values. The strategy was an inspection because we did it manually. If we had written a _program_ that read the program to perform this proof automatically, we would have called it _program analysis_.
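For example, assuming the specification really is "always return 0 for integer inputs," one possible repair (a sketch, not the only fix) makes the informal proof above go through for all three symbolic cases:

```javascript
// Repaired version: handles positive, zero, AND negative integers,
// and rejects non-integers (such as strings) explicitly.
function countToZero(input) {
  if (!Number.isInteger(input))
    throw new Error("expected an integer");
  while (input > 0) // positive case: decrement down to 0
    input--;
  while (input < 0) // negative case: increment up to 0
    input++;
  return input;     // by the two loops above, this is always 0
}
```

Re-running the symbolic argument on this version, every integer case now ends at 0, and the string case is explicitly rejected rather than silently returned.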
The benefit of analysis is that it _can_ demonstrate that a program is correct in all cases. This is because analysis can handle infinite spaces of possible inputs by mapping them onto a finite space of possible executions. It's not always possible to do this in practice, since many kinds of programs _can_ execute in infinite ways, but it gets us closer to proving correctness.
One popular type of automatic program analysis tools is a *static analysis* tool. These tools read programs and identify potential defects using the types of formal proofs like the ones above. They typically result in a set of warnings, each one requiring inspection by a developer to verify, since some of the warnings may be false positives (something the tool thought was a defect, but wasn't). Although static analysis tools can find many kinds of defects, they aren't yet viewed by developers to be that useful because the false positives are often large in number and the way they are presented make them difficult to understand <johnson13>. There is one exception to this, and it's a static analysis tool you've likely used: a compiler. Compilers verify the correctness of syntax, grammar, and for statically-typed languages, the correctness of types. As I'm sure you've discovered, compiler errors aren't always the easiest to comprehend, but they do find real defects automatically. The research community is just searching for more advanced ways to check more advanced specifications of program behavior.
Not all analytical techniques rely entirely on logic. In fact, one of the most popular methods of verification in industry are *code reviews*, also known as _inspections_. The basic idea of an inspection is to read the program analytically, following the control and data flow inside the code to look for defects. This can be done alone, in groups, and even included as part of the process of integrating changes, to verify them before they are committed to a branch. Modern code reviews, while informal, help find defects, stimulate knowledge transfer between developers, increase team awareness, and help identify alternative implementations that can improve quality <bacchelli13>. One study found that measures of how much a developer knows about an architecture can increase 66% to 150% depending on the project <rigby13>. That said, not all reviews are created equal: the best ones are thorough and conducted by a reviewer with strong familiarity with the code <kononenko16>; including reviewers that do not know each other or do not know the code can result in longer reviews, especially when run as meetings <seaman97>. Soliciting reviews asynchronously by allowing developers to request reviews from their peers is generally much more scalable <rigby11>, but this requires developers to be careful about which reviews they invest in. These choices about where to put reviewing attention can result in great disparities in what is reviewed, especially in open source: the more work a review is perceived to be, the less likely it is to be reviewed at all and the longer the delays in receiving a review <thongtanunam16>.
Beyond these more technical considerations around verifying a program's correctness are organizational issues around different software qualities. For example, different organizations have different sensitivities to defects. If a $0.99 game on the app store has a defect, that might not hurt its sales much, unless that defect prevents a player from completing the game. If Boeing's flight automation software has a defect, hundreds of people might die. The game developer might do a little manual play testing, release, and see if anyone reports a defect. Boeing will spend years proving mathematically with automatic program analysis that every line of code does what is intended, and repeating this verification every time a line of code changes. Moreover, requirements may change differently in different domains. For example, a game company might finally recognize the sexist stereotypes amplified in its game mechanics and have to change requirements, resulting in changed definitions of correctness, and the incorporation of new software qualities such as bias into testing plans. Similarly, Boeing might have to respond to pandemic fears by having to shift resources away from verifying flight crash safety to verifying public health safety. What type of verification is right for your team depends entirely on what a team is building, who's using it, and how they're depending on it.


@@ -1,143 +1,6 @@
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<!-- UPDATE -->
<title>Communication</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/communication" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/communication.png" class="img-responsive" />
<small>Credit: public domain</small>
<h1>Communication</h1>
<div class="lead">Amy J. Ko</div>
<p>
Because software engineering often times distributes work across multiple people, a fundamental challenge in software engineering is ensuring that everyone on a team has the same understanding of what is being built and why.
In the seminal book &ldquo;The Mythical Man Month&rdquo;, Fred Brooks argued that good software needs to have <strong>conceptual integrity</strong>, both in how it is designed and in how it is implemented (<a href="#brooks">Brooks 1995</a>).
This is the idea that the vision of what is being built must stay intact, even as the building of it gets distributed to multiple people.
When multiple people are responsible for implementing a single coherent idea, how can they ensure they all build the same idea?
</p>
<p>
The solution is effective communication.
As <a href="https://www.nytimes.com/2017/08/12/upshot/techs-damaging-myth-of-the-loner-genius-nerd.html" target="_blank">some events in industry have shown</a>, communication requires empathy and teamwork.
When communication is poor, teams become disconnected and produce software defects (<a href="#bettenburg">Bettenburg & Hassan 2013</a>).
Therefore, achieving effective communication practices is paramount.
</p>
<p>
It turns out, however, that communication plays such a powerful role in software projects that it even shapes how projects unfold.
Perhaps the most notable theory about the effect of communication is Conway's Law <a href="#conway">(Conway 1968)</a>.
This theory argues that any designed system&mdash;software included&mdash;will reflect the communication structures involved in producing it.
For example, think back to any course project where you divided the work into chunks and tried to combine them together into a final report at the end.
The report and its structure probably mirrored the fact that several distinct people worked on each section of the report, rather than sounding like a single coherent voice.
The same things happen in software: if the team writing error messages for a website isn't talking to the team presenting them, you're probably going to get a lot of error messages that aren't so clear, may not fit on screen, and may not be phrased using the terminology of the rest of the site.
On the other hand, if those two teams meet regularly to design the error messages together, communicating their shared knowledge, they might produce a seamless, coherent experience.
Not only does software follow this law when a project is created, it also follows this law as projects evolve over time <a href="#zhou">(Zhou & Mockus 2011)</a>.
</p>
<p>Because communication is so central, software engineers are constantly seeking information to further their work, going to their coworkers' desks, emailing them, chatting via messaging platforms, and even using social media <a href="#ko">(Ko et al. 2007)</a>. Some of the information that developers are seeking is easier to find than others. For example, in the study I just cited, it was pretty trivial to find information about who wrote a line of code or whether a build was done, but when the information they needed resided in someone else's head (e.g., <em>why</em> a particular line of code was written), it was slow or often impossible to retrieve it. Sometimes it's not even possible to find out who has the information. Researchers have investigated tools for trying to quantify expertise by automatically analyzing the code that developers have written, building platforms to help developers search for other developers who might know what they need to know (<a href="#mockusherbsleb">Mockus & Herbsleb 2002</a>, <a href="#begel">Begel et al. 2010</a>).</p>
<p>Communication is not always effective. In fact, there are many kinds of communication that are highly problematic in software engineering teams. For example, Perlow (<a href="#perlow">1999</a>) conducted an <a href="https://en.wikipedia.org/wiki/Ethnography" target="_blank">ethnography</a> of one team and found a highly dysfunctional use of interruptions in which the most expert members of a team were constantly interrupted to &ldquo;fight fires&rdquo; (immediately address critical problems) in other parts of the organization, and then the organization rewarded them for their heroics. This not only made the most expert engineers less productive, but it also disincentivized the rest of the organization to find effective ways of <em>preventing</em> the disasters from occurring in the first place. Not all interruptions are bad, and they can increase productivity, but they do increase stress (<a href="#mark">Mark et al. 2008</a>).</p>
<p>Communication isn't just about transmitting information; it's also about relationships and identity. For example, the dominant culture of many software engineering work environments&mdash;and even the <em>perceived</em> culture&mdash;is one that can deter many people from even pursuing careers in computer science. Modern work environments are still dominated by men, who speak loudly, out of turn, and disrespectfully, with <a href="https://www.susanjfowler.com/blog/2017/2/19/reflecting-on-one-very-strange-year-at-uber">some even bordering on sexual harassment</a>. Computer science as a discipline, and the software industry that it shapes, has only just begun to consider the urgent need for <em>cultural competence</em> (the ability for individuals and organizations to work effectively when their employees' thoughts, communications, actions, customs, beliefs, values, religions, and social groups vary) (<a href="#washington">Washington, 2020</a>). Similarly, software developers often have to work with people in other domains such as artists, content developers, data scientists, design researchers, designers, electrical engineers, mechanical engineers, product planners, program managers, and service engineers. One study found that developers' cross-disciplinary collaborations with people in these other domains required open-mindedness about the input of others, proactively informing everyone about code-related constraints, and ultimately seeing the broader picture of how pieces from different disciplines fit together; when developers didn't do these things, collaborations failed, and therefore projects failed (<a href="#li">Li et al. 2017</a>). These are not the conditions for trusting, effective communication.</p>
<p>
When communication is effective, it still takes time.
One of the key strategies for reducing the amount of communication necessary is <em>knowledge sharing</em> tools, which broadly refers to any information system that stores facts that developers would normally have to retrieve from a person.
By storing them in a database and making them easy to search, teams can avoid interruptions.
The most common knowledge sharing tools in software teams are issue trackers, which are often at the center of communication not only between developers, but also with every other part of a software organization (<a href="#bertram">Bertram et al. 2010</a>).
Community portals, such as GitHub pages or Slack teams, can also be effective ways of sharing documents and archiving decisions (<a href="#treudestory1">Treude & Storey 2011</a>).
Perhaps the most popular knowledge sharing tool in software engineering today is <a href="https://stackoverflow.com">Stack Overflow</a>, which archives facts about programming language and API usage.
Such sites, while they can be great resources, have the same problems as many media, such as gender bias that prevents contributions from women from being rewarded as highly as contributions from men (<a href="#may">May et al. 2019</a>).
</p>
<p>Because all of this knowledge is so critical to progress, when developers leave an organization and haven't archived their knowledge somewhere, it can be quite disruptive to progress. Organizations often have single points of failure, in which a single developer may be critical to a team's ability to maintain and enhance a software product (<a href="#rigby">Rigby et al. 2016</a>). When newcomers join a team and lack the right knowledge, they introduce defects (<a href="#foucault">Foucault et al. 2015</a>). Some companies try to mitigate this by rotating developers between projects, &ldquo;cross-training&rdquo; them to ensure that the necessary knowledge to maintain a project is distributed across multiple engineers.</p>
<p>What does all of this mean for you as an individual developer? To put it simply, don't underestimate the importance of talking. Know who you need to talk to, talk to them frequently, and to the extent that you can, write down what you know, both to lessen the demand for talking and mitigate the risk of you not being available, and to make your knowledge more precise and accessible in the future. It often takes decades for engineers to excel at communication. The very fact that you know why communication is important gives you a critical head start.</p>
<center class="lead"><a href="productivity.html">Next chapter: Productivity</a></center>
<h2>Further reading</h2>
<small>
<p>Salah Bendifallah and Walt Scacchi. 1989. <a href="http://dx.doi.org/10.1145/74587.74624" target="_blank">Work structures and shifts: an empirical analysis of software specification teamwork</a>. In Proceedings of the 11th international conference on Software engineering (ICSE '89). ACM, New York, NY, USA, 260-270.</p>
<p id="begel">Andrew Begel, Yit Phang Khoo, and Thomas Zimmermann. 2010. <a href="http://dx.doi.org/10.1145/1806799.1806821" target="_blank">Codebook: discovering and exploiting relationships in software repositories</a>. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 125-134.</p>
<p id="bertram">Dane Bertram, Amy Voida, Saul Greenberg, and Robert Walker. 2010. <a href="http://dx.doi.org/10.1145/1718918.1718972" target="_blank">Communication, collaboration, and bugs: the social nature of issue tracking in small, collocated teams</a>. In Proceedings of the 2010 ACM conference on Computer supported cooperative work (CSCW '10). ACM, New York, NY, USA, 291-300.</p>
<p id="bettenburg">Bettenburg, N., & Hassan, A. E. (2013). <a href="https://doi.org/10.1007/s10664-012-9205-0" target="_blank">Studying the impact of social interactions on software quality</a>. Empirical Software Engineering, 18(2), 375-431.</p>
<p id="brooks">Brooks, F.B. (1995). <a href="http://dl.acm.org/citation.cfm?id=207583" target="_blank">The Mythical Man-Month: Essays on Software Engineering</a>, Addison-Wesley.</p>
<p id="conway">Conway, M. E. (1968). <a href="http://michaelsaunders.com.au/wp-content/uploads/2016/10/Conway-Man.pdf" target="_blank">How do committees invent</a>. Datamation, 14(4), 28-31.</p>
<p>Torgeir Dings&oslash;yr and Emil R&oslash;yrvik. 2003. <a href="http://dl.acm.org/citation.cfm?id=776827" target="_blank">An empirical study of an informal knowledge repository in a medium-sized software consulting company</a>. In Proceedings of the 25th International Conference on Software Engineering (ICSE '03). IEEE Computer Society, Washington, DC, USA, 84-92.</p>
<p id="foucault">Matthieu Foucault, Marc Palyart, Xavier Blanc, Gail C. Murphy, and Jean-R&eacute;my Falleri. 2015. <a href="https://doi.org/10.1145/2786805.2786870" target="_blank">Impact of developer turnover on quality in open-source software</a>. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 829-841.</p>
<p id="ko">Amy J. Ko, Robert DeLine, and Gina Venolia. 2007. <a href="http://dx.doi.org/10.1109/ICSE.2007.45" target="_blank">Information Needs in Collocated Software Development Teams</a>. In Proceedings of the 29th international conference on Software Engineering (ICSE '07). IEEE Computer Society, Washington, DC, USA, 344-353.</p>
<p id="li">Li, P. L., Ko, A. J., & Begel, A. (2017, May). <a href="http://dl.acm.org/citation.cfm?id=3100319">Cross-disciplinary perspectives on collaborations with software engineers</a>. In Proceedings of the 10th International Workshop on Cooperative and Human Aspects of Software Engineering (pp. 2-8).</p>
<p id="mark">Mark, G., Gudith, D., & Klocke, U. (2008, April). <a href="http://dl.acm.org/citation.cfm?id=1357072" target="_blank">The cost of interrupted work: more speed and stress</a>. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (pp. 107-110).</p>
<p id="may">May, A., Wachs, J., & Hannák, A. (2019). <a href="https://link.springer.com/article/10.1007/s10664-019-09685-x">Gender differences in participation and reward on Stack Overflow</a>. Empirical Software Engineering, 1-23.</p>
<p>Audris Mockus. 2010. <a href="http://doi.acm.org/10.1145/1882291.1882311" target="_blank">Organizational volatility and its effects on software defects</a>. In Proceedings of the eighteenth ACM SIGSOFT international symposium on Foundations of software engineering (FSE '10). ACM, New York, NY, USA, 117-126.</p>
<p id="mockusherbsleb">Audris Mockus and James D. Herbsleb. 2002. <a href="http://dx.doi.org/10.1145/581339.581401" target="_blank">Expertise browser: a quantitative approach to identifying expertise</a>. In Proceedings of the 24th International Conference on Software Engineering (ICSE '02). ACM, New York, NY, USA, 503-512.</p>
<p id="perlow">Perlow, L. A. (1999). <a href="http://journals.sagepub.com/doi/abs/10.2307/2667031" target="_blank">The time famine: Toward a sociology of work time</a>. Administrative science quarterly, 44(1), 57-81.</p>
<p>Pikkarainen, M., Haikara, J., Salo, O., Abrahamsson, P., & Still, J. (2008). <a href="http://dl.acm.org/citation.cfm?id=1380667" target="_blank">The impact of agile practices on communication in software development</a>. Empirical Software Engineering, 13(3), 303-337.</p>
<p id="rigby">Peter C. Rigby, Yue Cai Zhu, Samuel M. Donadelli, and Audris Mockus. 2016. <a href="https://doi.org/10.1145/2884781.2884851">Quantifying and mitigating turnover-induced knowledge loss: case studies of chrome and a project at Avaya</a>. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 1006-1016.</p>
<p id="santos">Ronnie E. S. Santos, Fabio Q. B. da Silva, Cleyton V. C. de Magalh&atilde;es, and Cleviton V. F. Monteiro. 2016. <a href="https://doi.org/10.1145/2884781.2884837">Building a theory of job rotation in software engineering from an instrumental case study</a>. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 971-981.</p>
<p>Sfetsos, P., Stamelos, I., Angelis, L., & Deligiannis, I. (2009). <a href="https://link.springer.com/article/10.1007/s10664-008-9093-5" target="_blank">An experimental investigation of personality types impact on pair effectiveness in pair programming</a>. Empirical Software Engineering, 14(2), 187.</p>
<p id="treudestory1">Christoph Treude and Margaret-Anne Storey. 2011. <a href="http://dx.doi.org/10.1145/2025113.2025129" target="_blank">Effective communication of software development knowledge through community portals</a>. In Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11). ACM, New York, NY, USA, 91-101.</p>
<p>Christoph Treude and Margaret-Anne Storey. 2009. <a href="http://dx.doi.org/10.1109/ICSE.2009.5070504" target="_blank">How tagging helps bridge the gap between social and technical aspects in software development</a>. In Proceedings of the 31st International Conference on Software Engineering (ICSE '09). IEEE Computer Society, Washington, DC, USA, 12-22.</p>
<p>Keiji Uemura and Miki Ohori. 1984. <a href="http://dl.acm.org/citation.cfm?id=801955" target="_blank">A cooperative approach to software development by application engineers and software engineers</a>. In Proceedings of the 7th international conference on Software engineering (ICSE '84). IEEE Press, Piscataway, NJ, USA, 86-96.</p>
<p id="washington">Alicia Nicki Washington. 2020. <a href="https://doi.org/10.1145/3328778.3366792">When Twice as Good Isn't Enough: The Case for Cultural Competence in Computing</a>. Proceedings of the 51st ACM Technical Symposium on Computer Science Education. 2020.</p>
<p id="zhou">Minghui Zhou and Audris Mockus. 2011. <a href="https://doi.org/10.1145/1985793.1985831" target="_blank">Does the initial environment impact the future of developers?</a> In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 271-280.</p>
</small>
<h2>Podcasts</h2>
<small>
<p>Software Engineering Daily. <a href="https://softwareengineeringdaily.com/2016/06/13/female-pursuit-computer-science-jennifer-wang/" target="_blank">Female Pursuit of Computer Science with Jennifer Wang</a>.</p>
<p>Software Engineering Daily. <a href="https://softwareengineeringdaily.com/2016/03/14/state-programming-jeff-atwood/" target="_blank">The State of Programming with Stack Overflow Co-Founder Jeff Atwood</a>.</p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>Comprehension</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/comprehension" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/network.png" class="img-responsive" />
<small>Credit: public domain</small>
<h1>Program Comprehension</h1>
<div class="lead">Amy J. Ko</div>
<p>Despite all of the activities that we've talked about so far&mdash;communicating, coordinating, planning, designing, architecting&mdash;really, most of a software engineer's time is spent reading code (<a href="#maalej">Maalej et al. 2014</a>). Sometimes this is their own code, which makes this reading easier. Most of the time, it is someone else's code, whether it's a teammate's, or part of a library or API you're using. We call this reading <strong>program comprehension</strong>.</p>
<p>Being good at program comprehension is a critical skill. You need to be able to read a function and know what it will do with its inputs; you need to be able to read a class and understand its state and functionality; you also need to be able to comprehend a whole implementation, understanding its architecture. Without these skills, you can't test well, you can't debug well, and you can't fix or enhance the systems you're building or maintaining. In fact, studies of software engineers' first year at their first job show that a significant majority of their time is spent trying to simply comprehend the architecture of the system they are building or maintaining and understanding the processes that are being followed to modify and enhance them (<a href="#dagenais">Dagenais et al. 2010</a>).</p>
<p>What's going on when developers comprehend code? Usually, developers are trying to answer questions about code that help them build larger models of how a program works. Because program comprehension is hard, they avoid it when they can, relying on explanations from other developers rather than trying to build precise models of how a program works on their own (<a href="#roehm">Roehm et al. 2012</a>). Several studies have catalogued the general questions that developers must be able to answer in order to understand programs (<a href="#sillito">Sillito et al. 2006</a>, <a href="#latoza">LaToza & Myers 2010</a>). Here are several dozen common questions that developers ask:</p>
<table class="table table-striped">
<tr>
<td>Which type represents this domain concept or this UI element or action?</td>
<td>Where in the code is the text in this error message or UI element?</td>
</tr>
<tr>
<td>Where is there any code involved in the implementation of this behavior?</td>
<td>Is there an entity named something like this in that unit (for example in a project, package or class)?</td>
</tr>
<tr>
<td>What are the parts of this type?</td>
<td>Which types is this type a part of?</td>
</tr>
<tr>
<td>Where does this type fit in the type hierarchy?</td>
<td>Does this type have any siblings in the type hierarchy?</td>
</tr>
<tr>
<td>Where is this field declared in the type hierarchy?</td>
<td>Who implements this interface or these abstract methods?</td>
</tr>
<tr>
<td>Where is this method called or type referenced?</td>
<td>When during the execution is this method called?</td>
</tr>
<tr>
<td>Where are instances of this class created?</td>
<td>Where is this variable or data structure being accessed?</td>
</tr>
<tr>
<td>What data can we access from this object?</td>
<td>What does the declaration or definition of this look like?</td>
</tr>
<tr>
<td>What are the arguments to this function?</td>
<td>What are the values of these arguments at runtime?</td>
</tr>
<tr>
<td>What data is being modified in this code?</td>
<td>How are instances of these types created and assembled?</td>
</tr>
<tr>
<td>How are these types or objects related?</td>
<td>How is this feature or concern (object ownership, UI control, etc) implemented?</td>
</tr>
<tr>
<td>What in this structure distinguishes these cases?</td>
<td>What is the "correct" way to use or access this data structure?</td>
</tr>
<tr>
<td>How does this data structure look at runtime?</td>
<td>How can data be passed to (or accessed at) this point in the code?</td>
</tr>
<tr>
<td>How is control getting (from here to) here?</td>
<td>Why isn't control reaching this point in the code?</td>
</tr>
<tr>
<td>Which execution path is being taken in this case?</td>
<td>Under what circumstances is this method called or exception thrown?</td>
</tr>
<tr>
<td>What parts of this data structure are accessed in this code?</td>
<td>How does the system behavior vary over these types or cases?</td>
</tr>
<tr>
<td>What are the differences between these files or types?</td>
<td>What is the difference between these similar parts of the code (e.g., between sets of methods)?</td>
</tr>
<tr>
<td>What is the mapping between these UI types and these model types?</td>
<td>How can we know this object has been created and initialized correctly?</td>
</tr>
</table>
<p>If you think about the diversity of questions in this list, you can see why program comprehension requires expertise. You not only need to understand programming languages quite well, but you also need to have strategies for answering all of the questions above (and more) quickly, effectively, and accurately.</p>
<p>
So how do developers go about answering these questions?
Studies comparing experts and novices show that experts use prior knowledge that they have about architecture, design patterns, and the problem domain a program is built for to know what questions to ask and how to answer them, whereas novices rely on surface features of code, which leads them to spend considerable time reading code that is irrelevant to a question (<a href="#vonmay">von Mayrhauser & Vans 1994</a>, <a href="#latoza2">LaToza et al. 2007</a>).
Reading and comprehending source code is fundamentally different from reading and comprehending natural language (<a href="#binkley">Binkley et al. 2013</a>); what experts are ultimately doing is reasoning about <strong>dependencies</strong> between code (<a href="#weiser">Weiser 1981</a>).
Dependencies include things like <strong>data dependencies</strong> (where a variable is used to compute something, what modifies a data structure, how data flows through a program, etc.) and <strong>control dependencies</strong> (which components call which functions, which events can trigger a function to be called, how a function is reached, etc.).
All of the questions above fundamentally get at different types of data and control dependencies.
In fact, theories of how developers navigate code by following these dependencies are highly predictive of what information a developer will seek next (<a href="#fleming">Fleming et al. 2013</a>), suggesting that expert behavior is highly procedural.
This work, and work explicitly investigating the role of identifier names (<a href="#lawrie">Lawrie et al. 2006</a>), finds that names are critical to facilitating higher-level comprehension of program behavior.
</p>
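To make these two kinds of dependencies concrete, here is a small hypothetical JavaScript function (all names invented for illustration), annotated with the dependencies a developer would trace when answering the questions above:

```javascript
// Hypothetical example of the dependencies developers trace while comprehending code.
function applyDiscount(price, isMember) {
  const rate = isMember ? 0.1 : 0; // control dependency: which value is chosen
                                   // depends on the value of isMember
  const discount = price * rate;   // data dependency: discount is computed
                                   // from price and rate
  return price - discount;         // data dependency: the result flows from
                                   // both price and discount
}

// To answer "why is this total wrong?", a developer works backwards through
// the chain of dependencies: total ← discount ← rate ← isMember.
const total = applyDiscount(100, true);
```

Answering a question like "what data is being modified in this code?" amounts to following the data dependencies; answering "how is control getting here?" amounts to following the control dependencies.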
<p>
Of course, program comprehension is not an inherently individual process either.
Expert developers are resourceful, and frequently ask others for explanations of program behavior.
Some of this might happen between coworkers, where someone seeking insight asks other engineers for summaries of program behavior, to accelerate their learning (<a href="#koinfo">Ko et al. 2007</a>).
Others might rely on public forums, such as Stack Overflow, for explanations of API behavior (<a href="#mamykina">Mamykina et al. 2011</a>).
These social help-seeking strategies are strongly mediated by a developer's willingness to admit to more expert teammates that they need help.
Some research, for example, has found that junior developers are reluctant to ask for help out of fear of looking incompetent, even when everyone on a team is willing to offer help and their manager prefers that the developer prioritize productivity over fear of stigma (<a href="#begel">Begel and Simon, 2008</a>).
And then, of course, learning is just hard.
For example, one study investigated the challenges that developers face in learning new programming languages, finding that unlearning old habits, shifting to new language paradigms, learning new terminology, and adjusting to new tools all required materials that could bridge from their prior knowledge to the new language, but few such materials existed (<a href="#shrestha">Shrestha et al. 2020</a>).
These findings suggest the critical importance of teams ensuring that newcomers view them as psychologically safe places, where vulnerable actions like expressing a need for help will not be punished, ridiculed, or shamed, but rather validated, celebrated, and encouraged.
</p>
<p>
While much of program comprehension is individual and social skill, some aspects of program comprehension are determined by the design of programming languages.
For example, some programming languages result in programs that are more comprehensible.
One framework called the <em>Cognitive Dimensions of Notations</em> (<a href="#green">Green 1989</a>) lays out some of the tradeoffs in programming language design that result in these differences in comprehensibility.
For example, one of the dimensions in the framework is <strong>consistency</strong>, which refers to how much of a notation can be <em>guessed</em> based on an initial understanding of a language.
JavaScript has low consistency because of operators like <code>==</code>, which behave differently depending on what the type of the left and right operands are.
Knowing the behavior for Booleans doesn't tell you the behavior for a Boolean being compared to an integer.
In contrast, Java is a high consistency language: <code>==</code> is only ever valid when both operands are of the same type.
</p>
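A few lines of JavaScript show this inconsistency in action: the coercion rules for <code>==</code> differ by operand type, so knowing one case does not let you guess the others:

```javascript
// JavaScript's == coerces its operands in type-specific ways, so behavior
// learned for one pair of types doesn't generalize to others.
console.log(true == 1);     // true: the Boolean is coerced to the number 1
console.log("1" == 1);      // true: the string is coerced to a number
console.log("" == false);   // true: both sides coerce to the number 0
console.log(null == false); // false: null is only loosely equal to undefined
console.log(NaN == NaN);    // false: NaN is never equal to anything

// === never coerces, behaving more consistently, like Java's ==
console.log("1" === 1);     // false: operands of different types are never equal
```

Each line requires knowing a different coercion rule, which is exactly what low consistency means: an initial understanding of the notation doesn't let you predict the rest of it.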
<p>
These differences in notation can have some impact.
Encapsulation through data structures leads to better comprehension than monolithic or purely functional languages (<a href="#woodfield">Woodfield et al. 1981</a>, <a href="#bhattacharya">Bhattacharya & Neamtiu 2011</a>).
Declarative programming paradigms (like CSS or HTML) have greater comprehensibility than imperative languages (<a href="#salvaneschi">Salvaneschi et al. 2014</a>).
Statically typed languages like Java (which require developers to declare the data type of all variables) result in fewer defects (<a href="#ray">Ray et al. 2014</a>), better comprehensibility because of the ability to construct better documentation (<a href="#endrikat">Endrikat et al. 2014</a>), and easier debugging (<a href="#hanenberg">Hanenberg et al. 2013</a>).
In fact, studies of more dynamic languages like JavaScript and Smalltalk (<a href="#callau">Calla&uacute; et al. 2013</a>) show that the dynamic features of these languages aren't really used all that much anyway.
Despite all of these measurable differences, the impact of notation seems to be modest in practice (<a href="#ray">Ray et al. 2014</a>).
All of this evidence suggests that the more you tell a compiler about what your code means (by declaring types, writing functional specifications, etc.), the more it helps other developers know what it means too, but that this doesn't translate into huge differences in defects.
</p>
<p>Code editors, development environments, and program comprehension tools can also be helpful. Early evidence showed that simple features like syntax highlighting and careful typographic choices can improve the speed of program comprehension (<a href="#baecker">Baecker 1988</a>). I have also worked on several tools to support program comprehension, including the Whyline, which automates many of the more challenging aspects of navigating dependencies in code, and visualizes them (<a href="#ko">Ko & Myers 2009</a>):</p>
<p class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="https://www.youtube.com/embed/pbElN8nfe3k" scrolling="no" webkitAllowFullScreen mozallowfullscreen allowFullScreen></iframe>
</p>
<p>The path from novice to expert in program comprehension is one that involves understanding programming language semantics exceedingly well and reading <em>a lot</em> of code, design patterns, and architectures. Anticipate that as you develop these skills, it will take you time to build robust understandings of what a program is doing, slowing down your writing, testing, and debugging.</p>
<center class="lead"><a href="verification.html">Next chapter: Verification</a></center>
<h2>Further reading</h2>
<small>
<p id="baecker">R. Baecker. 1988. <a href="http://ieeexplore.ieee.org/abstract/document/93716/" target="_blank">Enhancing program readability and comprehensibility with tools for program visualization</a>. In Proceedings of the 10th international conference on Software engineering (ICSE '88). IEEE Computer Society Press, Los Alamitos, CA, USA, 356-366.</p>
<p id="begel">Begel, A., & Simon, B. (2008, September). <a href="https://doi.org/10.1145/1404520.1404522">Novice software developers, all over again</a>. In Proceedings of the fourth international workshop on computing education research (pp. 3-14).</p>
<p id="bhattacharya">Pamela Bhattacharya and Iulian Neamtiu. 2011. <a href="https://doi.org/10.1145/1985793.1985817" target="_blank">Assessing programming language impact on development and maintenance: a study on C and C++</a>. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 171-180.</p>
<p id="binkley">Binkley, D., Davis, M., Lawrie, D., Maletic, J. I., Morrell, C., & Sharif, B. (2013). <a href="https://link.springer.com/article/10.1007/s10664-012-9201-4" target="_blank">The impact of identifier style on effort and comprehension</a>. Empirical Software Engineering, 18(2), 219-276.</p>
<p id="callau">Calla&uacute;, O., Robbes, R., Tanter, &Eacute;., & R&ouml;thlisberger, D. (2013). <a href="https://doi.org/10.1145/1985441.1985448" target="_blank">How (and why) developers use the dynamic features of programming languages: the case of Smalltalk</a>. Empirical Software Engineering, 18(6), 1156-1194.</p>
<p id="dagenais">Barth&eacute;l&eacute;my Dagenais, Harold Ossher, Rachel K. E. Bellamy, Martin P. Robillard, and Jacqueline P. de Vries. 2010. <a href="http://dx.doi.org/10.1145/1806799.1806842" target="_blank">Moving into a new software project landscape</a>. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 275-284.</p>
<p id="endrikat">Stefan Endrikat, Stefan Hanenberg, Romain Robbes, and Andreas Stefik. 2014. <a href="https://doi.org/10.1145/2568225.2568299" target="_blank">How do API documentation and static typing affect API usability?</a> In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 632-642.</p>
<p id="green">Green, T. R. (1989). Cognitive dimensions of notations. People and computers V, 443-460.</p>
<p id="fleming">Fleming, S. D., Scaffidi, C., Piorkowski, D., Burnett, M., Bellamy, R., Lawrance, J., & Kwan, I. (2013). <a href="https://doi.org/10.1145/2430545.2430551" target="_blank">An information foraging theory perspective on tools for debugging, refactoring, and reuse tasks</a>. ACM Transactions on Software Engineering and Methodology (TOSEM), 22(2), 14.</p>
<p id="hanenberg">Stefan Hanenberg, Sebastian Kleinschmager, Romain Robbes, &Eacute;ric Tanter, Andreas Stefik. <a href="https://doi.org/10.1007/s10664-013-9289-1" target="_blank">An empirical study on the impact of static typing on software maintainability</a>. Empirical Software Engineering. 2013.</p>
<p id="koinfo">Amy J. Ko, Rob DeLine, and Gina Venolia (2007). <a href="https://doi.org/10.1109/ICSE.2007.45">Information needs in collocated software development teams</a>. In 29th International Conference on Software Engineering, 344-353.</p>
<p id="ko">Amy J. Ko and Brad A. Myers (2009, April). <a href="https://doi.org/10.1145/1518701.1518942" target="_blank">Finding causes of program output with the Java Whyline</a>. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1569-1578).</p>
<p id="latoza">Thomas D. LaToza and Brad A. Myers. 2010. <a href="http://dx.doi.org/10.1145/1806799.1806829" target="_blank">Developers ask reachability questions</a>. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 185-194.</p>
<p id="latoza2">Thomas D. LaToza, David Garlan, James D. Herbsleb, and Brad A. Myers. 2007. <a href="http://dx.doi.org/10.1145/1287624.1287675" target="_blank">Program comprehension as fact finding</a>. In Proceedings of the the 6th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on The foundations of software engineering (ESEC-FSE '07). ACM, New York, NY, USA, 361-370.</p>
<p id="lawrie">Lawrie, D., Morrell, C., Feild, H., & Binkley, D. (2006, June). What's in a name? A study of identifiers. IEEE International Conference on Program Comprehension, 3-12.</p>
<p id="maalej">Walid Maalej, Rebecca Tiarks, Tobias Roehm, and Rainer Koschke. 2014. <a href="http://dx.doi.org/10.1145/2622669" target="_blank">On the Comprehension of Program Comprehension</a>. ACM Transactions on Software Engineering and Methodology. 23, 4, Article 31 (September 2014), 37 pages.</p>
<p id="mamykina">Mamykina, L., Manoim, B., Mittal, M., Hripcsak, G., & Hartmann, B. (2011, May). <a href="https://doi.org/10.1145/1978942.1979366">Design lessons from the fastest q&a site in the west</a>. In Proceedings of the SIGCHI conference on Human factors in computing systems, 2857-2866.</p>
<p id="vonmay">A. von Mayrhauser and A. M. Vans. 1994. <a href="http://dl.acm.org/citation.cfm?id=257741" target="_blank">Comprehension processes during large scale maintenance</a>. In Proceedings of the 16th international conference on Software engineering (ICSE '94). IEEE Computer Society Press, Los Alamitos, CA, USA, 39-48.</p>
<p id="ray">Baishakhi Ray, Daryl Posnett, Vladimir Filkov, and Premkumar Devanbu. 2014. <a href="http://dx.doi.org/10.1145/2635868.2635922" target="_blank">A large scale study of programming languages and code quality in GitHub</a>. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 155-165.</p>
<p id="roehm">Tobias Roehm, Rebecca Tiarks, Rainer Koschke, and Walid Maalej. 2012. <a href="http://dl.acm.org/citation.cfm?id=2337254" target="_blank">How do professional developers comprehend software?</a> In Proceedings of the 34th International Conference on Software Engineering (ICSE '12). IEEE Press, Piscataway, NJ, USA, 255-265.</p>
<p id="salvaneschi">Guido Salvaneschi, Sven Amann, Sebastian Proksch, and Mira Mezini. 2014. <a href="https://doi.org/10.1145/2635868.2635895" target="_blank">An empirical study on program comprehension with reactive programming</a>. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 564-575.</p>
<p id="shrestha">Shrestha, N., Botta, C., Barik, T., & Parnin, C. (2020, May). <a href="http://nischalshrestha.me/docs/cross_language_interference.pdf">Here We Go Again: Why Is It Difficult for Developers to Learn Another Programming Language?</a>. International Conference on Software Engineering.</p>
<p id="sillito">Jonathan Sillito, Gail C. Murphy, and Kris De Volder. 2006. <a href="http://dx.doi.org/10.1145/1181775.1181779" target="_blank">Questions programmers ask during software evolution tasks</a>. In Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering (SIGSOFT '06/FSE-14). ACM, New York, NY, USA, 23-34.</p>
<p id="woodfield">S. N. Woodfield, H. E. Dunsmore, and V. Y. Shen. 1981. <a href="http://dl.acm.org/citation.cfm?id=802534" target="_blank">The effect of modularization and comments on program comprehension</a>. In Proceedings of the 5th international conference on Software engineering (ICSE '81). IEEE Press, Piscataway, NJ, USA, 215-223.</p>
<p id="stefik">Andreas Stefik and Susanna Siebert. 2013. <a href="https://doi.org/10.1145/2534973" target="_blank">An Empirical Investigation into Programming Language Syntax</a>. ACM Transactions on Computing Education 13, 4, Article 19 (November 2013), 40 pages.</p>
<p id="tao">Yida Tao, Yingnong Dang, Tao Xie, Dongmei Zhang, and Sunghun Kim. 2012. <a href="http://dx.doi.org/10.1145/2393596.2393656" target="_blank">How do software engineers understand code changes? An exploratory study in industry</a>. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE '12). ACM, New York, NY, USA, , Article 51 , 11 pages.</p>
<p id="weiser">Mark Weiser. 1981. <a href="http://dl.acm.org/citation.cfm?id=802557" target="_blank">Program slicing</a>. In Proceedings of the 5th international conference on Software engineering (ICSE '81). IEEE Press, Piscataway, NJ, USA, 439-449.</p>
</small>
<h2>Podcasts</h2>
<small>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2016/01/06/language-design-with-brian-kernighan/" target="_blank">Language Design with Brian Kernighan</a>.</p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>Debugging</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/debugging" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/swatter.png" class="img-responsive" />
<small>Credit: public domain</small>
<h1>Debugging</h1>
<div class="lead">Amy J. Ko</div>
<p>
Despite all of your hard work at design, implementation, and verification, your software has failed.
Somewhere in its implementation there's a line of code, or multiple lines of code, that, given a particular set of inputs, causes the program to fail.
How do you find those defective lines of code?
You debug, and when you're doing debugging right, you do it systematically (<a href="#zeller2">Zeller 2009</a>).
And yet, despite decades of research and practice, most developers have weak debugging skills, don't know how to properly use debugging tools, and still rely on basic print statements (<a href="#beller">Beller et al. 2018</a>).
</p>
<p>
To remedy this, let's discuss some of the basic skills involved in debugging.
</p>
<h2>Finding the defect</h2>
<p>To start, you have to <strong>reproduce</strong> the failure. Failure reproduction is a matter of identifying inputs to the program (whether data it receives upon being executed, user inputs, network traffic, or any other form of input) that causes the failure to occur. If you found this failure while <em>you</em> were executing the program, then you're lucky: you should be able to repeat whatever you just did and identify the inputs or series of inputs that caused the problem, giving you a way of testing that the program no longer fails once you've fixed the defect. If someone else was the one executing the program (for example, a user, or someone on your team), you better hope that they reported clear steps for reproducing the problem. When bug reports lack clear reproduction steps, bugs often can't be fixed (<a href="#bettenburg">Bettenburg et al. 2008</a>).</p>
<p>If you can reproduce the problem, the next challenge is to <strong>localize</strong> the defect, trying to identify the cause of the failure in code. There are many different strategies for localizing defects. At the highest level, one can think of this process as a hypothesis testing activity (<a href="#gilmore">Gilmore 1991</a>):</p>
<ol>
<li>Observe failure</li>
<li>Form hypothesis of cause of failure</li>
<li>Devise a way to test the hypothesis, such as analyzing the code you believe caused it, or executing the program with the reproduction steps and stopping at the line you believe is wrong.</li>
<li>If the hypothesis was supported (meaning the program failed for the reason you thought it did), stop. Otherwise, return to step 1.</li>
</ol>
<p>The problems with the strategy above are numerous. First, what if you can't think of a possible cause? Second, what if your hypothesis is way off? You could spend <em>hours</em> generating hypotheses that are completely off base, effectively analyzing all of your code before finding the defect.</p>
<p>Another strategy is working backwards (<a href="#ko">Ko & Myers 2008</a>):</p>
<ol>
<li>Observe failure</li>
<li>Identify the line of code that caused the failing output</li>
<li>Identify the lines of code that caused the line of code in step 2 and any data used on the line in step 2</li>
<li>Repeat step 3 recursively, analyzing all lines of code for defects along the chain of causality</li>
</ol>
<p>The nice thing about this strategy is that you're <em>guaranteed</em> to find the defect if you can accurately identify the causes of each line of code contributing to the failure. It still requires you to analyze each line of code and potentially execute to it in order to inspect what might be wrong, but it requires potentially less work than guessing. My dissertation work investigated how to automate this strategy, allowing you to simply click on the faulty output and then immediately see all of its upstream causes (<a href="#ko">Ko & Myers 2008</a>).</p>
<p>Yet another strategy called <em>delta debugging</em> is to compare successful and failing executions of the program (<a href="#zeller">Zeller 2002</a>):</p>
<ol>
<li>Identify a successful set of inputs</li>
<li>Identify a failing set of inputs</li>
<li>Compare the differences in state from the successful and failing executions</li>
<li>Identify a change to input that minimizes the differences in states between the two executions</li>
<li>Variables and values that are different in these two executions contain the defect</li>
</ol>
<p>This is a powerful strategy, but only when you have successful inputs and when you can automate comparing runs and identifying changes to inputs.</p>
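A toy sketch of the input-minimization idea behind this strategy (not Zeller's full <em>ddmin</em> algorithm, and all names here are invented for illustration): given a failing input and a predicate that reports whether an input still fails, repeatedly try removing pieces, keeping any smaller input that still fails:

```javascript
// A toy input minimizer in the spirit of delta debugging: shrink a failing
// input by removing one element at a time, keeping removals that still fail.
function minimizeFailingInput(input, fails) {
  let current = input.slice();
  let shrunk = true;
  while (shrunk) {
    shrunk = false;
    for (let i = 0; i < current.length; i++) {
      const candidate = current.slice(0, i).concat(current.slice(i + 1));
      if (fails(candidate)) { // the smaller input still reproduces the failure
        current = candidate;
        shrunk = true;
        break;                // restart the scan on the smaller input
      }
    }
  }
  return current;             // a minimal failing input to inspect
}

// Hypothetical failure: the program fails whenever the input contains
// both "<" and ">" anywhere in the sequence.
const fails = (chars) => chars.includes("<") && chars.includes(">");
const minimal = minimizeFailingInput(["a", "<", "b", ">", "c"], fails);
// minimal is now just ["<", ">"], isolating the failure-inducing difference
```

The elements the minimizer keeps are exactly the "differences between the successful and failing executions" that the strategy directs your attention to.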
<p>One of the simplest strategies is to work forward:</p>
<ol>
<li>Execute the program with the reproduction steps</li>
<li>Step forward one instruction at a time until the program deviates from intended behavior</li>
<li>This step that deviates or one of the previous steps caused the failure</li>
</ol>
<p>This strategy is easy to follow, but can take a <em>long</em> time because there are so many instructions that can execute.</p>
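A minimal sketch of working forward with print statements, using a hypothetical buggy function: instrument each step, run the reproduction input, and watch for the first value that deviates from what you intended:

```javascript
// Hypothetical buggy function: the average of an array of numbers.
function average(numbers) {
  let sum = 0;
  for (let i = 1; i < numbers.length; i++) { // defect: i starts at 1, not 0
    sum += numbers[i];
    console.log(`after i=${i}, sum=${sum}`); // step forward, one value at a time
  }
  return sum / numbers.length;
}

// Reproduction input: average([2, 4, 6]) should be 4, but isn't.
// The very first logged line (i=1, sum=4) deviates from the intended trace
// (i=0, sum=2), localizing the defect to the loop initialization.
average([2, 4, 6]);
```

Here the deviation appears on the first instrumented step; in a real program it might take thousands of steps to surface, which is exactly why this strategy can be so slow.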
<p>For particularly complex software, it can sometimes be necessary to debug with the help of teammates, helping to generate hypotheses, identify more effective search strategies, or rule out the influence of particular components in a bug (<a href="#aranda">Aranda and Venolia 2009</a>).</p>
<p>
Ultimately, all of these strategies are essentially search algorithms, seeking the events that occurred while a program executed with a particular set of inputs that caused its output to be incorrect.
Because programs execute millions, and potentially billions, of instructions, these strategies are necessary to reduce the scope of your search.
This is where debugging <strong>tools</strong> come in: if you can find a tool that supports an effective strategy, then your work to search through those millions and billions of instructions will be greatly accelerated.
This might be a print statement, a breakpoint debugger, a performance profiler, or one of the many advanced debugging tools beginning to emerge from research.
</p>
<h2>Fixing defects</h2>
<p>
Once you've found the defect, what do you do?
It turns out that there are usually many ways to repair a defect.
How professional developers fix defects depends a lot on the circumstances: if they're near a release, they may not even fix it if it's too risky; if there's no pressure, and the fix requires major changes, they may refactor or even redesign the program to prevent the failure (<a href="#murphyhill">Murphy-Hill et al. 2013</a>).
This can be a delicate, risky process: in one study of open-source operating system bug fixes, 27% of the incorrect fixes were made by developers who had never read the source code files they changed, suggesting that a key to correct fixes is a deep comprehension of exactly how the defective code is intended to behave (<a href="#yin">Yin et al. 2011</a>).
</p>
<p>
These risks suggest the importance of <strong>impact analysis</strong>, the activity of systematically and precisely analyzing the consequences of a proposed fix.
This can involve analyzing the dependencies affected by a fix, re-running manual and automated tests, and perhaps even running user tests to ensure that the way in which you fixed a bug does not inadvertently introduce problems with usability or workflow.
Debugging is therefore like surgery: slow, methodical, purposeful, and risk-averse.
</p>
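A small part of impact analysis can be sketched as a graph traversal: given which modules depend on which, compute everything that transitively depends on the module a fix touches, and therefore should be re-tested. The module names below are hypothetical:

```python
# Hypothetical dependency graph: module -> modules that depend on it
dependents = {
    "parser": ["compiler", "linter"],
    "compiler": ["ide"],
    "linter": [],
    "ide": [],
}

def impacted(changed, graph):
    """A minimal impact analysis: everything that transitively depends
    on the changed module may be affected by the fix."""
    seen = set()
    frontier = [changed]
    while frontier:
        module = frontier.pop()
        for dep in graph.get(module, []):
            if dep not in seen:
                seen.add(dep)
                frontier.append(dep)
    return seen

print(sorted(impacted("parser", dependents)))  # → ['compiler', 'ide', 'linter']
```

Of course, real impact analysis must also account for dynamic dependencies, shared data, and user-facing workflows that no static graph captures.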
<center class="lead"><a href="index.html">Back to table of contents</a></center>
<h2>Further reading</h2>
<small>
<p id="aranda">Jorge Aranda and Gina Venolia. 2009. <a href="http://dx.doi.org/10.1109/ICSE.2009.5070530">The secret life of bugs: Going past the errors and omissions in software repositories</a>. In Proceedings of the 31st International Conference on Software Engineering (ICSE '09). IEEE Computer Society, Washington, DC, USA, 298-308.</p>
<p id="beller">Beller, M., Spruit, N., Spinellis, D., & Zaidman, A. (2018, May). <a href="https://doi.org/10.1145/3180155.3180175">On the dichotomy of debugging behavior among programmers</a>. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE) (pp. 572-583).</p>
<p id="bettenburg">Nicolas Bettenburg, Sascha Just, Adrian Schr&ouml;ter, Cathrin Weiss, Rahul Premraj, and Thomas Zimmermann. 2008. <a href="http://dx.doi.org/10.1145/1453101.1453146">What makes a good bug report?</a> In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering (SIGSOFT '08/FSE-16). ACM, New York, NY, USA, 308-318.</p>
<p id="gilmore">Gilmore, D. (1991). <a href="http://www.sciencedirect.com/science/article/pii/000169189190009O">Models of debugging</a>. Acta Psychologica, 78, 151-172.</p>
<p id="ko">Amy J. Ko and Brad A. Myers. 2008. <a href="http://dx.doi.org/10.1145/1368088.1368130">Debugging reinvented: asking and answering why and why not questions about program behavior</a>. In Proceedings of the 30th international conference on Software engineering (ICSE '08). ACM, New York, NY, USA, 301-310.</p>
<p id="murphyhill">Emerson Murphy-Hill, Thomas Zimmermann, Christian Bird, and Nachiappan Nagappan. 2013. <a href="http://dl.acm.org/citation.cfm?id=2486833">The design of bug fixes</a>. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 332-341.</p>
<p id="yin">Zuoning Yin, Ding Yuan, Yuanyuan Zhou, Shankar Pasupathy, and Lakshmi Bairavasundaram. 2011. <a href="http://dx.doi.org/10.1145/2025113.2025121">How do fixes become bugs?</a> In Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11). ACM, New York, NY, USA, 26-36.</p>
<p id="zeller">Andreas Zeller. 2002. <a href="http://dx.doi.org/10.1145/587051.587053">Isolating cause-effect chains from computer programs</a>. In Proceedings of the 10th ACM SIGSOFT symposium on Foundations of software engineering (SIGSOFT '02/FSE-10). ACM, New York, NY, USA, 1-10.</p>
<p id="zeller2">Zeller, A. (2009). <a href="https://books.google.com/books?id=_63Bm4LAdDIC&lpg=PP1&ots=TAzo27xsK-&dq=why%20programs%20fail&lr&pg=PP1#v=onepage&q=why%20programs%20fail&f=false">Why programs fail: a guide to systematic debugging</a>. Elsevier.</p>
</small>
<h2>Podcasts</h2>
<small>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2016/11/19/debugging-stories-with-haseeb-qureshi/">Debugging Stories with Haseeb Qureshi</a></p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

deploy.zsh Normal file

@ -0,0 +1 @@
ssh ajko@ovid.u.washington.edu "cd public_html/books/cooperative-software-development && git pull"


@ -1,169 +1,6 @@
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>History</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/history" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/Hamilton.jpg" class="img-responsive" />
<small>Margaret Hamilton working on the Apollo flight software. <a href="https://commons.wikimedia.org/w/index.php?curid=37255847">Credit: NASA, public domain</a>.</small>
<h1>History</h1>
<div class="lead">Amy J. Ko</div>
<p>
Computers haven't been around for long.
If you read one of the many histories of computing and information, such as James Gleick's <a href="https://www.amazon.com/Information-History-Theory-Flood/dp/1400096235">The Information</a>, or <a href="https://www.amazon.com/Tool-Partner-Evolution-Human-Computer-Interaction/dp/1627059636">Jonathan Grudin's History of HCI</a>, you'll learn that before <em>digital</em> computers, computers were people, calculating things manually, as portrayed in the film <a href="https://en.wikipedia.org/wiki/Hidden_Figures">Hidden Figures</a> (watch it if you haven't!). And that <em>after</em> digital computers, programming wasn't something that many people did.
It was reserved for whoever had access to a mainframe, and they wrote their programs on punchcards.
Computing was in no way a ubiquitous, democratized activity&mdash;it was reserved for the few that could afford and maintain a room-sized machine.
</p>
<p>
Because programming required such painstaking planning in machine code and computers were slow, most programs were not that complex.
Their value was in calculating things faster than a person could do by hand, which meant thousands of calculations in a minute rather than one calculation in a minute.
Computer programmers were not solving problems that had no solutions; they were translating existing solutions (for example, a quadratic formula) into the notation a computer understood.
Their power wasn't in creating new realities or facilitating new tasks, it was accelerating old tasks.
</p>
<p>
The birth of software engineering, therefore, did not come until programmers started solving problems that <em>didn't</em> have existing solutions, or were new ideas entirely.
Most of these were done in academic contexts to develop things like basic operating systems and methods of input and output.
These were complex projects, but as research, they didn't need to scale; they just needed to work.
It wasn't until the late 1960s when the first truly large software projects were attempted commercially, and software had to actually perform.
</p>
<p>
The IBM 360 operating system was one of the first big projects of this kind.
Suddenly, there were multiple people working on multiple components, all of which interacted with one another.
Each part of the program needed to coordinate with the others, which usually meant that each part's <em>authors</em> needed to coordinate, and the term <em>software engineering</em> was born.
Programmers and academics from around the world, especially those who were working on big projects, created conferences so they could meet and discuss their challenges.
In the <a href="http://homepages.cs.ncl.ac.uk/brian.randell/NATO/nato1968.PDF">first software engineering conference</a> in 1968, attendees speculated about why projects were shipping late, why they were over budget, and what they could do about it.
</p>
<p>
At the time, one of the key people behind coining the phrase software engineering was <a href="https://en.wikipedia.org/wiki/Margaret_Hamilton_(scientist)">Margaret Hamilton</a>, a computer scientist who was Director of the Software Engineering Division of the MIT Instrumentation Laboratory.
One of the lab's key projects in the late 1960's was developing the on-board flight software for the Apollo space program.
Hamilton led the development of error detection and recovery, the information displays, the lunar lander, and many other critical components, while managing a team of other computer scientists who helped.
It was as part of this project that many of the central problems in software engineering began to emerge, including verification of code, coordination of teams, and managing versions.
This led to one of her passions, which was giving software legitimacy as a form of engineering&mdash; at the time, it was viewed as routine, uninteresting, and simple work.
Her leadership in the field established the field as a core part of systems engineering.
</p>
<p>
The first conference, the IBM 360 project, and Hamilton's experiences on the Apollo mission identified many problems that had no clear solutions:
</p>
<ul>
<li>When you're solving a problem that doesn't yet have a solution, what is a good process for building a solution?</li>
<li>When software does so many different things, how can you know software "works"?</li>
<li>How can you make progress when <em>no one</em> on the team understands every part of the program?</li>
<li>When people leave a project, how do you ensure their replacement has all of the knowledge they had?</li>
<li>When no one understands every part of the program, how do you diagnose defects?</li>
<li>When people are working in parallel, how do you prevent them from clobbering each other's work?</li>
<li>If software engineering is about more than coding, what skills does a good coder need to have?</li>
<li>What kinds of tools and languages can accelerate a programmer's work and help them prevent mistakes?</li>
<li>How can projects not lose sight of the immense complexity of human needs, values, ethics, and policy that interact with engineering decisions?</li>
</ul>
<p>
These questions are at the foundation of the field of software engineering and are the core content of this course.
Some of them have pretty good answers.
For example, the research community rapidly converged on version control systems, software testing, and a wide array of high-level programming languages such as Fortran (chronicled by <a href="#metcalf">Metcalf 2002</a>), LISP (chronicled by <a href="#mccarthy">McCarthy 1978</a>), C++ (chronicled by <a href="#stroustrup">Stroustrup 1996</a>), and Smalltalk (chronicled by <a href="#kay">Kay 1996</a>), all of which were precursors to today's modern languages such as Java, Python, and JavaScript.
</p>
<p>
Other questions, particularly those concerning the <em>human</em> aspects of software engineering, have been hopelessly difficult to understand and improve.
One of the seminal books on these issues was Fred P. Brooks, Jr.'s <em>The Mythical Man Month</em>.
In it, he presented hundreds of claims about software engineering.
For example, he hypothesized that adding more programmers to a project would actually make productivity <em>worse</em> at some level, not better, because knowledge sharing would be an immense but necessary burden.
He also claimed that the <em>first</em> implementation of a solution is usually terrible and should be treated like a prototype: used for learning and then discarded.
These and other claims have been the foundation of decades of research, all in search of some deeper answer to the questions above.
</p>
<p>
Other social aspects of software engineering have received considerably less treatment.
For example, despite the central role of women in programming the first digital computers, and the central role of women like Margaret Hamilton and Grace Hopper leading the formation of software engineering as a field in research and government, these histories are often forgotten, erased, and overshadowed by the gradual shift from software development being a field dominated by women to a field dominated by men.
Many texts are beginning to document the central role of sexism that was at the heart of causing this culture shift (e.g., <a href="#abbate">Abbate 2012</a>).
Similarly, software engineering research and practice has largely ignored the way that software can encode, amplify, and reinforce discrimination by encoding it into data, algorithms, and software architectures (e.g., <a href="#benjamin">Benjamin, 2019</a>).
These histories show that, just like any other human activity, there are strong cultural forces that shape how people engineer software together, what they engineer, and what effect that has on society.
</p>
<p>
If we step even further beyond software engineering as an activity and think more broadly about the role that software is playing in society today, there are also other, newer questions that we've only begun to answer.
If every part of society now runs on code, what responsibility do software engineers have to ensure that code is right?
What responsibility do software engineers have to avoid algorithmic bias?
If our cars are to soon drive us around, who's responsible for the first death: the car, the driver, the software engineers who built it, or the company that sold it?
These ethical questions are in some ways the <em>future</em> of software engineering, likely to shape its regulatory context, its processes, and its responsibilities.
</p>
<p>
There are also <em>economic</em> roles that software plays in society that it didn't before.
Around the world, software is a major source of job growth, but also a major source of automation, eliminating jobs that people used to do.
These larger forces that software exerts on the world demand that software engineers have a stronger understanding of the roles that software plays in society, as the decisions that engineers make can have profound unintended consequences.
</p>
<p>
We're nowhere close to having deep answers about these questions, neither the old ones nor the new ones.
We know <em>a lot</em> about programming languages and <em>a lot</em> about testing.
These are areas amenable to automation and so computer science has rapidly improved and accelerated these parts of software engineering.
The rest of it, as we shall see in this book, has not made as much progress.
In this class, we'll discuss what we know and the much larger space of what we don't.
</p>
<center class="lead"><a href="organizations.html">Next chapter: Organizations</a></center>
<h2>Further reading</h2>
<p id="abbate">Abbate, Janet (2012). <a href="https://mitpress.mit.edu/books/recoding-gender">Recoding Gender: Women's Changing Participation in Computing</a>. The MIT Press.</p>
<p id="benjamin">Benjamin, R. (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Social Forces.</p>
<p>Brooks Jr, F. P. (1995). <a href="https://books.google.com/books?id=Yq35BY5Fk3gC" target="_blank">The Mythical Man-Month (anniversary ed.)</a>. Addison-Wesley.</p>
<p>Gleick, James (2011). <a href="https://books.google.com/books?id=617JSFW0D2kC" target="_blank">The Information: A History, A Theory, A Flood</a>. Pantheon Books.</p>
<p>Grudin, Jonathan (2017). <a href="https://books.google.com/books?id=Wc3hDQAAQBAJ" target="_blank">From Tool to Partner: The Evolution of Human-Computer Interaction</a>.</p>
<p id="kay">Kay, A. C. (1996, January). <a href="http://dl.acm.org/citation.cfm?id=1057828" target="_blank">The early history of Smalltalk</a>. In History of programming languages---II (pp. 511-598). ACM.</p>
<p>Ko, A. J. (2016). <a href="http://softwareengineeringdaily.com/2016/02/24/academia-to-industry-in-computer-science-with-andy-ko/">Interview with Andrew Ko on Software Engineering Daily about Software Engineering Research and Practice</a>.</p>
<p id="mccarthy">McCarthy, J. (1978, June). <a href="http://dl.acm.org/citation.cfm?id=1198360" target="_blank">History of LISP</a>. In History of programming languages I (pp. 173-185). ACM.</p>
<p id="metcalf">Metcalf, M. (2002, December). <a href="http://dl.acm.org/citation.cfm?id=602379" target="_blank">History of Fortran</a>. In ACM SIGPLAN Fortran Forum (Vol. 21, No. 3, pp. 19-20). ACM.</p>
<p id="stroustrup">Stroustrup, B. (1996, January). <a href="http://dl.acm.org/citation.cfm?id=1057836" target="_blank">A history of C++: 1979--1991</a>. In History of programming languages---II (pp. 699-769). ACM.</p>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

Binary image files changed (not shown): images/error.png (new), images/monitoring.jpg (new), images/zoho.jpg (new), and several other images modified or removed.


@ -1,109 +1,27 @@
<!DOCTYPE html>
<html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Bootstrap jQuery -->
<script src="https://code.jquery.com/jquery-3.4.1.slim.min.js" integrity="sha384-J6qa4849blE2+poT4WnyKhv5vZF5SrPo0iEjwBvKU7imGFAV0wwj1yYfoRSJoZ+n" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<!-- Load Lora font -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Peruse style -->
<link rel="stylesheet" href="http://faculty.washington.edu/ajko/books/peruse/peruse.css">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>Cooperative Software Development</title>
<!-- Peruse app -->
<script src="http://faculty.washington.edu/ajko/books/peruse/build/peruse.js"></script>
</head>
<body>
<img src="images/cover.jpg" class="img-responsive" />
<small>Credit: Creative Commons</small>
<h1>Cooperative Software Development</h1>
<div class="lead"><a href="http://faculty.uw.edu/ajko">Amy J. Ko</a> <small>with contributions from <a href="http://benjixie.com/">Benjamin Xie</a></small></div>
<table class="table">
<tr>
<td rowspan="14" width="40%">
<p>After teaching software engineering for many years, I've been frustrated by the lack of a simple, concise, and practical introduction to the human aspects of software engineering for students interested in becoming software engineers.</p>
<p> In response, I've distilled my lectures from the past decade into these brief writings. They don't represent <em>everything</em> we know about software engineering (in particular, I don't discuss the deep technical contributions from the field), but the chapters do synthesize the broad evidence we have about how teams have to work together to succeed.</p>
<p>I hope you enjoy! If you see something missing or wrong, <a href="https://github.com/andyjko/cooperative-software-development">Submit an issue or a pull request on GitHub</a>.</p>
</td>
<td>Chapter 1</td><td><a href="history.html">History</a></td>
</tr>
<tr>
<td>Chapter 2</td><td><a href="organizations.html">Organizations</a></td>
</tr>
<tr>
<td>Chapter 3</td><td><a href="communication.html">Communication</a></td>
</tr>
<tr>
<td>Chapter 4</td><td><a href="productivity.html">Productivity</a></td>
</tr>
<tr>
<td>Chapter 5</td><td><a href="quality.html">Quality</a></td>
</tr>
<tr>
<td>Chapter 6</td><td><a href="requirements.html">Requirements</a></td>
</tr>
<tr>
<td>Chapter 7</td><td><a href="architecture.html">Architecture</a></td>
</tr>
<tr>
<td>Chapter 8</td><td><a href="specifications.html">Specifications</a></td>
</tr>
<tr>
<td>Chapter 9</td><td><a href="process.html">Process</a></td>
</tr>
<tr>
<td>Chapter 10</td><td><a href="comprehension.html">Comprehension</a></td>
</tr>
<tr>
<td>Chapter 11</td><td><a href="verification.html">Verification</a></td>
</tr>
<tr>
<td>Chapter 12</td><td><a href="monitoring.html">Monitoring</a></td>
</tr>
<tr>
<td>Chapter 13</td><td><a href="evolution.html">Evolution</a></td>
</tr>
<tr>
<td>Chapter 14</td><td><a href="debugging.html">Debugging</a></td>
</tr>
</table>
<h2>Revision history</h2>
<ul>
<li><em>July 2020</em>. Revised all chapters to address racism, sexism, and ableism in software engineering.</li>
<li><em>July 2019</em>. Incorporated newly published work from ICSE, ESEC/FSE, SIGCSE, TSE, and TOSEM.</li>
<li><em>July 2018</em>. Incorporated newly published work from ICSE, ESEC/FSE, SIGCSE, TSE, and TOSEM.</li>
<li><em>July 2017</em>. First draft of the book release.</li>
</ul>
<p><small>
<p>This material is based upon work supported by the National Science Foundation under Grant No. <a target="_blank" href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=0952733">0952733</a>. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
</p>
<p><a rel="license" href="http://creativecommons.org/licenses/by-nd/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nd/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nd/4.0/">Creative Commons Attribution-NoDerivatives 4.0 International License</a></p>
</small></p>
<body onload="peruse('book.json')">
<script type="text/javascript">
@ -119,9 +37,5 @@
</script>
</body>
<body>
</html>


@ -1,139 +1,6 @@
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<!-- UPDATE -->
<title>Monitoring</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/monitoring" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/police.jpg" class="img-responsive" />
<small>Credit: public domain</small>
<h1>Monitoring</h1>
<div class="lead">Amy J. Ko</div>
<p>The first application I ever wrote was a complete and utter failure.</p>
<p>I was an eager eighth grader, full of wonder and excitement about the infinite possibilities in code, with an insatiable desire to build, build, build. I'd made plenty of little games and widgets for myself, but now was my chance to create something for someone else: my friend and I were making a game and he needed a tool to create pixel art for it. We had no money for fancy Adobe licenses, and so I decided to make a tool.</p>
<p>In designing the app, I made every imaginable software engineering mistake. I didn't talk to him about requirements. I didn't test on his computer before sending the finished app. I certainly didn't conduct any usability tests, performance tests, or acceptance tests. The app I ended up shipping was a pure expression of what I wanted to build, not what he needed to be creative or productive. As a result, it was buggy, slow, confusing, and useless, and blinded by my joy of coding, I had no clue.</p>
<p>Now, ideally my "customer" would have reported any of these problems to me right away, and I would have learned some tough lessons about software engineering. But this customer was my best friend, and also a very nice guy. He wasn't about to trash all of my hard work. Instead, he suffered in silence. He struggled to install, struggled to use, and worst of all struggled to create. He produced some amazing art a few weeks after I gave him the app, but it was only after a few months of progress on our game that I learned he hadn't used my app for a single asset, preferring instead to suffer through Microsoft Paint. My app was too buggy, too slow, and too confusing to be useful. I was devastated.</p>
<p>Why didn't I know it was such a complete failure? <strong>Because I wasn't looking</strong>. I'd ignored the ultimate test suite: <em>my customer</em>. I'd learned that the only way to really know whether software requirements are right is by watching how it executes in the world through <strong>monitoring</strong>.</p>
<h2>Discovering Failures</h2>
<p>Of course, this is easier said than done. That's because the (ideally) massive number of people executing your software is not easily observable (<a href="#menzies">Menzies & Zimmermann 2013</a>). Moreover, each software quality you might want to monitor (performance, functional correctness, usability) requires entirely different methods of observation and analysis. Let's talk about some of the most important qualities to monitor and how to monitor them.</p>
<p>Crashes, hangs, and kernel panics are some of the easiest failures to detect because they are overt and unambiguous. Microsoft was one of the first organizations to monitor them comprehensively, building what eventually became known as Windows Error Reporting (<a href="#glerum">Glerum et al. 2009</a>). It turns out that actually capturing these errors at scale and mining them for repeating, reproducible failures is quite complex, requiring classification, progressive data collection, and many statistical techniques to extract signal from noise. In fact, Microsoft has a dedicated team of data scientists and engineers whose sole job is to manage the error reporting infrastructure, monitor and triage incoming errors, and use trends in errors to make decisions about improvements to future releases and release processes. This is now standard practice in most companies and organizations, including other big software companies (Google, Apple, IBM, etc.), as well as open source projects (e.g., Mozilla). In fact, many application development platforms now include error reporting as a standard operating system feature.</p>
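At a much smaller scale than Windows Error Reporting, the core classification step is to bucket incoming crashes by a signature derived from their stack traces, so that recurring failures surface as counts. A Python sketch, with a hypothetical <code>parse</code> function standing in for the crashing code:

```python
import traceback
from collections import Counter

crash_buckets = Counter()

def report_crash(exc):
    """Bucket a crash by its innermost stack frame, a crude version of
    the classification that error-reporting pipelines perform at scale."""
    tb = traceback.extract_tb(exc.__traceback__)
    frame = tb[-1]  # the frame where the exception was raised
    signature = (type(exc).__name__, frame.name, frame.lineno)
    crash_buckets[signature] += 1

def parse(s):
    return int(s)  # raises ValueError on non-numeric input

for raw in ["1", "two", "3", "four"]:
    try:
        parse(raw)
    except Exception as e:
        report_crash(e)

# The most frequently recurring failure signature rises to the top
print(crash_buckets.most_common(1))
```

Production systems refine the signature (hashing whole stacks, collapsing inlined frames) and sample what they collect, but the bucketing idea is the same.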
<p>Performance, like crashes, kernel panics, and hangs, is easily observable in software, but a bit trickier to characterize as good or bad. How slow is too slow? How bad is it if something is slow occasionally? You'll have to define acceptable thresholds for different use cases to be able to identify problems automatically. Some experts in industry <a href="https://softwareengineeringdaily.com/2016/12/27/performance-monitoring-with-andi-grabner/">still view this as an art</a>.</p>
<p>It's also hard to monitor performance without actually <em>harming</em> performance. Many tools and services (e.g., <a href="https://newrelic.com/">New Relic</a>) are getting better at reducing this overhead and offering real time data about performance problems through sampling.</p>
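Sampling is the usual compromise: time only a fraction of calls, then compare an aggregate such as the 95th percentile against a threshold you've chosen for the use case. A sketch of the idea, where the decorator, the threshold, and <code>handle_request</code> are all hypothetical:

```python
import random
import statistics
import time

LATENCY_THRESHOLD_MS = 200.0  # assumption: an acceptable p95 for this service
samples = []  # sampled call latencies, in milliseconds

def monitored(fn):
    """Wrap fn so that roughly 1 in 10 calls is timed, keeping overhead low."""
    def wrapper(*args, **kwargs):
        if random.random() < 0.1:
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            samples.append((time.perf_counter() - start) * 1000.0)
            return result
        return fn(*args, **kwargs)
    return wrapper

@monitored
def handle_request(x):
    return x * 2

for i in range(1000):
    handle_request(i)

if len(samples) >= 20:
    p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
    print(f"p95 latency: {p95:.3f} ms, breached: {p95 > LATENCY_THRESHOLD_MS}")
```

Services like New Relic do essentially this, plus aggregation across machines and alerting when a threshold is breached.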
<p>Monitoring for data breaches, identity theft, and other security and privacy concerns is an incredibly important part of running a service, but also very challenging. This is partly because the tools for doing this monitoring are not yet well integrated, requiring each team to develop its own practices and monitoring infrastructure. But it's also because protecting data and identity is more than just detecting and blocking malicious payloads. It's also about recovering from ones that get through, developing reliable data streams about application network activity, monitoring for anomalies and trends in those streams, and developing practices for tracking and responding to warnings that your monitoring system might generate. Researchers are still actively inventing more scalable, usable, and deployable techniques for all of these activities.</p>
<p>The biggest limitation of the monitoring approaches above is that they only reveal <em>what</em> people are doing with your software, not <em>why</em> they are doing it. Monitoring can help you know that a problem exists, but it can't tell you why a program failed or why a person failed to use your software successfully.</p>
<h2>Discovering Missing Requirements</h2>
<p>Usability problems and missing features, unlike some of the preceding problems, are even harder to detect or observe, because the only true indicator that something is hard to use is in a user's mind. That said, there are a couple of approaches to detecting the possibility of usability problems.</p>
<p>One is by monitoring application usage. Assuming your users will tolerate being watched, there are many techniques: 1) automatically instrumenting applications for user interaction events, 2) mining events for problematic patterns, and 3) browsing and analyzing patterns for more subjective issues (<a href="#ivory">Ivory & Hearst 2001</a>). Modern tools and services like <a href="https://www.intercom.com/">Intercom</a> make it easier to capture, store, and analyze this usage data, although they still require you to have some upfront intuition about what to monitor. More advanced, experimental techniques in research automatically analyze undo events as indicators of usability problems (<a href="#akers">Akers et al. 2009</a>); this work observes that undo is often an indicator of a mistake in creative software, and mistakes are often indicators of usability problems.</p>
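<p>The undo-as-indicator idea can be sketched simply: compute each session's undo rate and flag outliers for human review. The 10% cutoff below is an illustrative assumption, not a threshold from the research:</p>

```python
def undo_rate(events):
    """Fraction of interaction events that are undos."""
    if not events:
        return 0.0
    return sum(1 for e in events if e == "undo") / len(events)

def flag_problem_sessions(sessions, max_rate=0.1):
    """Return session ids whose undo rate suggests a usability problem.

    `sessions` maps a hypothetical session id to its ordered event names;
    flagged sessions are candidates for closer (human) analysis.
    """
    return [sid for sid, events in sessions.items()
            if undo_rate(events) > max_rate]
```

<p>This only surfaces <em>candidate</em> problems; as the chapter notes, the event stream alone can't say why the user kept undoing.</p>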
<p>All of the usage data above can tell you <em>what</em> your users are doing, but not <em>why</em>. For this, you'll need to get explicit feedback from support tickets, support forums, product reviews, and other critiques of user experience. Some of these types of reports go directly to engineering teams, becoming part of bug reporting systems, while others end up in customer service or marketing departments. While all of this data is valuable for monitoring user experience, most companies still do a bad job of using anything but bug reports to improve user experience, overlooking the rich insights in customer service interactions (<a href="#chilana2">Chilana et al. 2011</a>).</p>
<p>Although bug reports are widely used, they have significant problems as a way to monitor: for developers to fix a problem, they need detailed steps to reproduce the problem, or stack traces or other state to help them track down the cause of a problem (<a href="#bettenburg">Bettenburg et al. 2008</a>); these are precisely the kinds of information that are hard for users to find and submit, given that most people aren't trained to produce reliable, precise information for failure reproduction. Additionally, once the information is recorded in a bug report, even <em>interpreting</em> the information requires social, organizational, and technical knowledge, meaning that if a problem is not addressed soon, an organization's ability to even interpret what the failure was and what caused it can decay over time (<a href="#aranda">Aranda & Venolia 2009</a>). All of these issues can lead to <a href="https://softwareengineeringdaily.com/2016/11/19/debugging-stories-with-haseeb-qureshi/">intractable debugging challenges</a>.</p>
<p>Larger software organizations now employ data scientists to help mitigate these challenges of analyzing and maintaining monitoring data and bug reports. Most of them try to answer questions such as (<a href="#begel">Begel & Zimmermann 2014</a>):</p>
<ul>
<li>"How do users typically use my application?"</li>
<li>"What parts of a software product are most used and/or loved by customers?"</li>
<li>"What are best key performance indicators (KPIs) for monitoring services?"</li>
<li>"What are the common patterns of execution in my application?"</li>
<li>"How well does test coverage correspond to actual code usage by our customers?"</li>
</ul>
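<p>Answering a question like "what parts of the product are most used?" usually reduces to aggregating telemetry. A sketch, assuming a hypothetical event log of <code>(user, feature)</code> records:</p>

```python
from collections import Counter

def most_used_features(event_log, top=3):
    """Rank features by how many distinct users touched them.

    `event_log` is a hypothetical list of (user_id, feature) telemetry
    records; counting distinct users rather than raw events keeps one
    heavy user from dominating the ranking.
    """
    users_per_feature = {}
    for user, feature in event_log:
        users_per_feature.setdefault(feature, set()).add(user)
    counts = Counter({f: len(u) for f, u in users_per_feature.items()})
    return [feature for feature, _ in counts.most_common(top)]
```

<p>The choice of denominator (distinct users vs. raw events vs. sessions) is exactly the kind of decision these data scientists make, and it changes the answer.</p>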
<p>
The most mature data science roles in software engineering teams even have multiple distinct roles, including <em>Insight Providers</em>, who gather and analyze data to inform decisions, <em>Modeling Specialists</em>, who use their machine learning expertise to build predictive models, and <em>Platform Builders</em>, who create the infrastructure necessary for gathering data (<a href="#kim">Kim et al. 2016</a>).
Of course, smaller organizations may have individuals who take on all of these roles.
Moreover, not all ways of discovering missing requirements are data science roles.
Many companies, for example, have customer experience specialists and community managers, who are less interested in data about experiences and more interested in directly communicating with customers about their experiences.
These relational forms of monitoring can be much more effective at revealing software quality issues that aren't as easily observed, such as issues of racial or sexual bias in software or other forms of structural injustices built into the architecture of software.
</p>
<p>All of this effort to capture and maintain user feedback can be messy to analyze because it usually comes in the form of natural language text. Services like <a href="http://answerdash.com">AnswerDash</a> (a company I co-founded) structure this data by organizing requests around frequently asked questions. AnswerDash imposes a little widget on every page in a web application, making it easy for users to submit questions and find answers to previously asked questions. This generates data about the features and use cases that are leading to the most confusion, which types of users are having this confusion, and where in an application the confusion is happening most frequently. This product was based on several years of research in my lab (<a href="#chilana">Chilana et al. 2013</a>).</p>
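<p>The kind of aggregation such a service performs can be sketched in a few lines: group user questions by the page they came from and rank pages by question volume (a loose, hypothetical model of the data described above):</p>

```python
from collections import Counter

def confusion_hotspots(questions, top=2):
    """Rank pages by how many user questions they generated.

    `questions` is a hypothetical list of (page_url, question_text)
    records; pages with the most questions are where confusion is
    concentrated and documentation or design effort should go first.
    """
    counts = Counter(page for page, _ in questions)
    return counts.most_common(top)
```

<p>The real product layered question deduplication and user segmentation on top of this, but the core signal is this simple count.</p>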
<center class="lead"><a href="evolution.html">Next chapter: Evolution</a></center>
<h2>Further reading</h2>
<small>
<p id="akers">David Akers, Matthew Simpson, Robin Jeffries, and Terry Winograd. 2009. <a href="http://dx.doi.org/10.1145/1518701.1518804" target="_blank">Undo and erase events as indicators of usability problems</a>. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09). ACM, New York, NY, USA, 659-668.</p>
<p id="aranda">Jorge Aranda and Gina Venolia. 2009. <a href="http://dl.acm.org/citation.cfm?id=1555045" target="_blank">The secret life of bugs: Going past the errors and omissions in software repositories</a>. In Proceedings of the 31st International Conference on Software Engineering (ICSE '09). IEEE Computer Society, Washington, DC, USA, 298-308.</p>
<p id="begel">Begel, A., & Zimmermann, T. (2014). <a href="https://doi.org/10.1145/2568225.2568233" target="_blank">Analyze this! 145 questions for data scientists in software engineering</a>. In Proceedings of the 36th International Conference on Software Engineering (pp. 12-23).</p>
<p id="bettenburg">Nicolas Bettenburg, Sascha Just, Adrian Schr&ouml;ter, Cathrin Weiss, Rahul Premraj, and Thomas Zimmermann. 2008. <a href="http://dx.doi.org/10.1145/1453101.1453146" target="_blank">What makes a good bug report?</a> In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering (SIGSOFT '08/FSE-16). ACM, New York, NY, USA, 308-318.</p>
<p id="chilana">Chilana, P. K., Ko, A. J., Wobbrock, J. O., & Grossman, T. (2013). <a href="http://dl.acm.org/citation.cfm?id=2470685" target="_blank">A multi-site field study of crowdsourced contextual help: usage and perspectives of end users and software teams</a>. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 217-226).</p>
<p id="chilana2">Parmit K. Chilana, Amy J. Ko, Jacob O. Wobbrock, Tovi Grossman, and George Fitzmaurice. 2011. <a href="http://dx.doi.org/10.1145/1978942.1979270" target="_blank">Post-deployment usability: a survey of current practices</a>. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 2243-2246.</p>
<p id="glerum">Kirk Glerum, Kinshuman Kinshumann, Steve Greenberg, Gabriel Aul, Vince Orgovan, Greg Nichols, David Grant, Gretchen Loihle, and Galen Hunt. 2009. <a href="http://dx.doi.org/10.1145/1629575.1629586" target="_blank">Debugging in the (very) large: ten years of implementation and experience</a>. In Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles (SOSP '09). ACM, New York, NY, USA, 103-116.</p>
<p id="ivory">Ivory M.Y., Hearst, M.A. (2001). <a href="http://doi.acm.org/10.1145/503112.503114" target="_blank">The state of the art in automating usability evaluation of user interfaces</a>. ACM Computing Surveys, 33(4).</p>
<p id="menzies">Menzies, T., & Zimmermann, T. (2013). <a href="https://www.computer.org/csdl/magazine/so/2013/04/mso2013040031/13rRUyY28Wp">Software analytics: so what?</a> IEEE Software, 30(4), 31-37.</p>
<p id="kim">Miryung Kim, Thomas Zimmermann, Robert DeLine, and Andrew Begel. 2016. <a href="https://doi.org/10.1145/2884781.2884783" target="_blank">The emerging role of data scientists on software development teams</a>. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 96-107.</p>
</small>
<h2>Podcasts</h2>
<small>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2016/12/27/performance-monitoring-with-andi-grabner/" target="_blank">Performance Monitoring with Andi Grabner</a></p>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2016/07/28/2739/" target="_blank">The Art of Monitoring with James Turnbull</a></p>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2016/11/19/debugging-stories-with-haseeb-qureshi/" target="_blank">Debugging Stories with Haseeb Qureshi</a></p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>Organizations</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/organizations" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/team.jpg" class="img-responsive" />
<small>A software engineering team hard at work. Credit: Amy J. Ko</small>
<h1>Organizations</h1>
<div class="lead">Amy J. Ko</div>
<p>The photo above is a candid shot of some of the software engineers of <a href="http://answerdash.com">AnswerDash</a>, a company I co-founded in 2012. There are a few things to notice. First, you see one of the employees explaining something, while others are diligently working off to the side. It's not a huge team; just a few engineers, plus several employees in other parts of the organization in another room. This, as simple as it looks, is pretty much what all software engineering work looks like. Some organizations have one of these teams; others have thousands.</p>
<p>What you <em>can't</em> see is just how much <em>complexity</em> underlies this work. You can't see the organizational structures that exist to manage this complexity. Inside this room and the rooms around it were processes, standards, reviews, workflows, managers, values, culture, decision making, analytics, marketing, sales. And at the center of it were people executing all of these things as well as they could to achieve the organization's goal.</p>
<p>Organizations are a much bigger topic than I could possibly address here. To deeply understand them, you'd need to learn about <a href="https://en.wikipedia.org/wiki/Organizational_studies" target="_blank">organizational studies</a>, <a href="https://en.wikipedia.org/wiki/Organizational_behavior" target="_blank">organizational behavior</a>, <a href="https://en.wikipedia.org/wiki/Information_system" target="_blank">information systems</a>, and business in general.</p>
<p>The subset of this knowledge that's critical to understand about software engineering is limited to a few important concepts. The first and most important concept is that even in software organizations, the point of the company is rarely to make software; it's to provide <strong>value</strong> <a href="#osterwalder">(Osterwalder et al. 2015)</a>. Software is sometimes the central means to providing that value, but more often than not, it's the <em>information</em> flowing through that software that's the truly valuable piece. <a href="requirements.html">Requirements</a>, which we will discuss in a later chapter, help engineers organize how software will provide value.</p>
<p>The individuals in a software organization take on different roles to achieve that value. These roles are sometimes spread across different people and sometimes bundled up into one person, depending on how the organization is structured, but the roles are always there. Let's go through each one in detail so you understand how software engineers relate to each role.</p>
<ul>
<li><b>Marketers</b> look for opportunities to provide value. In for-profit businesses, this might mean conducting market research, estimating the size of opportunities, identifying audiences, and getting those audiences' attention. Non-profits need to do this work as well in order to get their solutions to people, but may be driven more by solving problems than making money.</li>
<li><b>Product managers</b> decide what value the product will provide, monitoring the marketplace and prioritizing work.</li>
<li><b>Designers</b> decide <em>how</em> software will provide value. This isn't about code or really even about software; it's about envisioning solutions to problems that people have.</li>
<li><b>Software engineers</b> write code with other engineers to implement requirements envisioned by designers. If they fail to meet requirements, the design won't be implemented correctly, which will prevent the software from providing value.</li>
<li><b>Sales</b> takes the product that's been built and tries to sell it to the audiences that marketers have identified. Salespeople also try to refine an organization's understanding of what the customer wants and needs, providing feedback to marketing, product, and design, which engineers then address.</li>
<li><b>Support</b> helps the people using the product to use it successfully and, like sales, provides feedback to product, design, and engineering about the product's value (or lack thereof) and its defects.</li>
</ul>
<p>As I noted above, sometimes the roles above get merged into individuals. When I was CTO at AnswerDash, I had software engineering roles, design roles, product roles, sales roles, <em>and</em> support roles. This was partly because it was a small company when I was there. As organizations grow, these roles tend to be divided into smaller pieces. This division often means that different parts of the organization don't share knowledge, even when it would be advantageous <a href="#chilana">(Chilana 2011)</a>.</p>
<p>Note that in the division of responsibilities above, software engineers really aren't the designers by default. They don't decide what product is made or what problems that product solves. They may have opinions&mdash;and a great deal of power to enforce their opinions, as the people building the product&mdash;but it's not ultimately their decision.</p>
<p>There are other roles you might be thinking of that I haven't mentioned:</p>
<ul>
<li><strong>Engineering managers</strong> exist in all roles when teams get to a certain size, helping to move information between higher and lower parts of an organization. Even <em>engineering</em> managers are primarily focused on organizing and prioritizing work, and not doing engineering (<a href="#kalliamvakou">Kalliamvakou et al. 2018)</a>. Much of their time is also spent ensuring every engineer has what they need to be productive, while also managing coordination and interpersonal conflict between engineers.</li>
<li><strong>Data scientists</strong>, although a new role, typically <em>facilitate</em> decision making on the part of any of the roles above <a href="#begel">(Begel & Zimmermann 2014)</a>. They might help engineers find bugs, marketers analyze data, track sales targets, mine support data, or inform design decisions. They're experts at using data to accelerate and improve the decisions made by the roles above.</li>
<li><strong>Researchers</strong>, also called user researchers, also help people in a software organization make decisions, but usually <em>product</em> decisions, helping marketers, sales, and product managers decide what products to make and who would want them. In many cases, they can complement the work of data scientists, <a href="https://www.linkedin.com/pulse/ux-research-analytics-yann-riche?trk=prof-post" target="_blank">providing qualitative work to triangulate quantitative data</a>.</li>
<li><strong>Ethics and policy specialists</strong>, who might come with backgrounds in law, policy, or social science, might shape terms of service, software licenses, algorithmic bias audits, privacy policy compliance, and processes for engaging with stakeholders affected by the software being engineered. Any company that works with data, especially those that work with data at large scales or in contexts with great potential for harm, hate, and abuse, needs significant expertise to anticipate and prevent harm from engineering and design decisions.</li>
</ul>
<p>Every decision made in a software team is under uncertainty, and so another important concept in organizations is <strong>risk</strong> <a href="#boehm">(Boehm 1991)</a>. It's rarely possible to predict the future, and so organizations must take risks. Much of an organization's function is to mitigate the consequences of risks. Data scientists and researchers mitigate risk by increasing confidence in an organization's understanding of the market and its consumers. Engineers manage risk by trying to avoid defects. Of course, as many popular outlets on software engineering have begun to discover, when software fails, it usually "did exactly what it was told to do. The reason it failed is that it was told to do the wrong thing." (<a href="https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/">Somers 2017</a>).</p>
<p>
Open source communities are organizations too.
The core activities of design, engineering, and support still exist in these, but how much a community is engaged in marketing and sales depends entirely on the purpose of the community.
Big, established open source projects like <a href="https://mozilla.org" target="_blank">Mozilla</a> have revenue, buildings, and a CEO, and while they don't sell anything, they do market.
Others, like Linux <a href="#lee">(Lee & Cole 2003)</a>, rely heavily on contributions from both volunteers <a href="#ye">(Ye & Kishida 2003)</a> and paid employees of companies that depend on Linux, like IBM, Google, and others.
In these settings, there are still all of the challenges that come with software engineering, but fewer of the constraints that come from a for-profit or non-profit motive.
In fact, recent work empirically uncovered 9 reasons why modern open source projects fail: 1) lost to competition, 2) made obsolete by technology advances, 3) lack of time to volunteer, 4) lack of interest by contributors, 5) outdated technologies, 6) poor maintainability, 7) interpersonal conflicts amongst developers, 8) legal challenges, and 9) acquisition (<a href="#coelho">Coelho and Valente 2017</a>).
Another study showed that funding open source projects often requires substantial donations from large corporations; most projects don't ask for donations, and those that do receive very little unless well-established, and most of those funds go to paying for basic expenses such as engineering salaries (<a href="#overney">Overney et al. 2020</a>).
Those aren't too different from traditional software organizations, aside from the added challenges of sustaining a volunteer workforce.
</p>
<p>All of the above has some important implications for what it means to be a software engineer:</p>
<ul>
<li>Engineers are not the only important role in a software organization. In fact, they may be less important to an organization's success than other roles because the decisions they make (how to implement requirements) have smaller impact on the organization's goals than other decisions (what to make, who to sell it to, etc.).</li>
<li>Engineers have to work with <em>a lot</em> of people working with different roles. Learning what those roles are and what shapes their success is important to being a good collaborator <a href="#li">(Li et al. 2017)</a>.</li>
<li>While engineers might have many great ideas for product, if they really want to shape what they're building, they should be in a product role, not an engineering role.</li>
</ul>
<p>All that said, without engineers, products wouldn't exist. They ensure that every detail about a product reflects the best knowledge of the people in their organization, and so attention to detail is paramount. In future chapters, we'll discuss all of the ways that software engineers manage this detail, mitigating the burden on their memories with tools and processes.</p>
<center class="lead"><a href="communication.html">Next chapter: Communication</a></center>
<h2>Further reading</h2>
<p id="begel">Begel, A., & Zimmermann, T. (2014). <a href="http://dl.acm.org/citation.cfm?id=2568233" target="_blank">Analyze this! 145 questions for data scientists in software engineering</a>. In Proceedings of the 36th International Conference on Software Engineering (pp. 12-23).</p>
<p id="boehm">Boehm, B. W. (1991). <a href="http://ieeexplore.ieee.org/abstract/document/62930" target="_blank">Software risk management: principles and practices</a>. IEEE software, 8(1), 32-41.</p>
<p id="coelho">Jailton Coelho and Marco Tulio Valente. 2017. <a href="https://doi.org/10.1145/3106237.3106246">Why modern open source projects fail</a>. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017). Association for Computing Machinery, New York, NY, USA, 186-196.</p>
<p id="chilana">Chilana, P. K., Ko, A. J., Wobbrock, J. O., Grossman, T., & Fitzmaurice, G. (2011). <a href="http://dl.acm.org/citation.cfm?id=1979270" target="_blank">Post-deployment usability: a survey of current practices</a>. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2243-2246). ACM.</p>
<p>Clegg, S. and Bailey, J.R. (2008). <a href="https://books.google.com/books?id=Uac5DQAAQBAJ" target="_blank">International Encyclopedia of Organization Studies</a>. Sage Publications.</p>
<p id="kalliamvakou">Kalliamvakou, E., Bird, C., Zimmermann, T., Begel, A., DeLine, R., German, D. M. <a href="https://doi.org/10.1109/TSE.2017.2768368" target="_blank">What Makes a Great Manager of Software Engineers?</a> To appear in IEEE Transactions on Software Engineering. IEEE.</p>
<p>Ko, Amy J. (2017). <a href="https://faculty.washington.edu/ajko/papers/Ko2017AnswerDashReflection.pdf" target="_blank">A Three-Year Participant Observation of Software Startup Software Evolution</a>. International Conference on Software Engineering, Software Engineering in Practice, to appear.</p>
<p id="lee">Lee, G. K., & Cole, R. E. (2003). <a href="http://pubsonline.informs.org/doi/abs/10.1287/orsc.14.6.633.24866" target="_blank">From a firm-based to a community-based model of knowledge creation: The case of the Linux kernel development</a>. Organization science, 14(6), 633-649.</p>
<p id="li">Li, Paul, Ko, Amy J., and Begel, Andrew (2017). <a href ="https://doi.org/10.1109/CHASE.2017.3">Collaborating with Software Engineers: Perspectives from Non-Software Experts.</a> In the Proceedings of the 10th International Workshop on Cooperative and Human Aspects of Software Engineering.</p>
<p id="osterwalder">A. Osterwalder, Y. Pigneur, G. Bernarda, & A. Smith (2015). <a href="https://books.google.com/books?id=jgu5BAAAQBAJ" target="_blank">Value proposition design: how to create products and services customers want</a>. John Wiley & Sons.</p>
<p id="overney">Overney, C., Meinicke, J., Kästner, C., & Vasilescu, B. (2020). <a href="https://cmustrudel.github.io/papers/overney20donations.pdf">How to Not Get Rich: An Empirical Study of Donations in Open Source</a>. International Conference on Software Engineering.</p>
<p>Somers, James (2017). <a href="https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/">The Coming Software Apocalypse</a>. The Atlantic Monthly.</p>
<p id="ye">Yunwen Ye and Kouichi Kishida. 2003. <a href="http://dl.acm.org/citation.cfm?id=776867" target="_blank">Toward an understanding of the motivation of Open Source Software developers</a>. In Proceedings of the 25th International Conference on Software Engineering (ICSE '03). IEEE Computer Society, Washington, DC, USA, 419-429.</p>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>Process</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/process" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/flow.jpg" class="img-responsive" />
<small>Credit: public domain</small>
<h1>Process</h1>
<div class="lead">Amy J. Ko</div>
<p>So you know what you're going to build and how you're going to build it. What process should you go about building it? Who's going to build what? What order should you build it in? How do you make sure everyone is in sync while you're building it? And most importantly, how do you make sure you build well and on time? These are fundamental questions in software engineering with many potential answers. Unfortunately, we still don't know which of those answers are right.</p>
<p>At the foundation of all of these questions are basic matters of <a href="https://en.wikipedia.org/wiki/Project_management" target="_blank">project management</a>: plan, execute, and monitor. But developers in the 1970's and on found that traditional project management ideas didn't seem to work. The earliest process ideas followed a "waterfall" model, in which a project begins by identifying requirements, writing specifications, implementing, testing, and releasing, all under the assumption that every stage could be fully tested and verified. (Recognize this? It's the order of topics we're discussing in this class!). Many managers seemed to like the waterfall model because it seemed structured and predictable; because most managers were originally software developers, they preferred a structured approach to project management (<a href="#weinberg">Weinberg 1982</a>). The reality, however, was that no matter how much verification one did of each of these steps, there always seemed to be more information in later steps that caused a team to reconsider its earlier decisions (e.g., imagine a customer liked a requirement when it was described in the abstract, but when it was actually built, they rejected it, because they finally saw what the requirement really meant).</p>
<p>In 1988, Barry Boehm proposed an alternative to waterfall called the Spiral model (<a href="#boehm">Boehm 1988</a>): rather than trying to verify every step before proceeding to the next level of detail, <em>prototype</em> every step along the way, getting partial validation, iteratively converging through a series of prototypes toward both an acceptable set of requirements <em>and</em> an acceptable product. Throughout, risk assessment is key, encouraging a team to reflect and revise process based on what they are learning. What was important about these ideas were not the particulars of Boehm's proposed process, but the disruptive idea that iteration and process improvement are critical to engineering great software.</p>
<img src="images/spiral.png" class="img-responsive" />
<p>Around the same time, two influential books were published. Fred Brooks wrote <strong>The Mythical Man Month</strong> (<a href="#brooks">Brooks 1995</a>), a book about software project management, full of provocative ideas that would be tested over the next three decades, including the idea that adding more people to a project would not necessarily increase productivity. Tom DeMarco and Timothy Lister wrote another famous book, <strong>Peopleware: Productive Projects and Teams</strong> (<a href="#demarco">DeMarco 1987</a>), arguing that the major challenges in software engineering are human, not technical. Both of these works still represent some of the most widely-read statements of the problem of managing software development.</p>
<p>These early ideas in software project management led to a wide variety of other discoveries about process. For example, organizations of all sizes can improve their process if they are very aware of what the people in the organization know and are capable of learning, and if they build robust processes to actually continually improve process (<a href="#dyba2">Dyb&aring; 2002</a>, <a href="#dyba">Dyb&aring; 2003</a>). This might mean monitoring the pace of work, incentivizing engineers to reflect on inefficiencies in process, and teaching engineers how to be comfortable with process change.</p>
<p>Beyond process improvement, other factors emerged. For example, researchers discovered that critical to team productivity was <strong>awareness</strong> of teammates' work (<a href="#ko">Ko et al. 2007</a>). Teams need tools like dashboards to help make them aware of changing priorities and tools like feeds to coordinate short term work (<a href="#treude">Treude & Storey 2010</a>). Moreover, researchers found that engineers tended to favor non-social sources such as documentation for factual information, but social sources for information to support problem solving (<a href="#milewski">Milewski 2007</a>). Decades ago, developers used tools like email and IRC for awareness; now they use tools like <a href="https://slack.com" target="_blank">Slack</a>, <a href="https://trello.com/" target="_blank">Trello</a>, <a href="http://github.com" target="_blank">GitHub</a>, and <a href="https://www.atlassian.com/software/jira" target="_blank">JIRA</a>, which have the same basic functionality, but are much more polished, streamlined, and customizable.</p>
<p>In addition to awareness, <strong>ownership</strong> is a critical idea in process. This is the idea that for every line of code, someone is responsible for its quality. The owner <em>might</em> be the person who originally wrote the code, but ownership could also shift to new team members. Studies of code ownership on Windows Vista and Windows 7 found that the less clear a component's ownership was, the more pre-release defects it had and the more post-release failures were reported by users (<a href="#bird">Bird et al. 2011</a>). This means that in addition to getting code written, having clear ownership and clear processes for transfer of ownership are key to functional correctness.</p>
<p><strong>Pace</strong> is another factor that affects quality. Clearly, there's a tradeoff between how fast a team works and the quality of the product it can build. In fact, interview studies of engineers at Google, Facebook, Microsoft, Intel, and other large companies found that the pressure to reduce "time to market" harmed nearly every aspect of teamwork: the availability and discoverability of information, clear communication, planning, integration with others' work, and code ownership (<a href="#rubin">Rubin & Rinard 2016</a>). Not only did a fast pace reduce quality, but it also reduced engineers' personal satisfaction with their job and their work. I encountered similar issues as CTO of my startup: while racing to market, I was often asked to meet impossible deadlines with zero defects and had to constantly communicate to the other executives in the company why this was not possible (<a href="#ko2">Ko 2017</a>).</p>
<p>
Because of the importance of awareness and communication, the <strong>distance</strong> between teammates is also a critical factor.
This is most visible in companies that hire remote developers or build distributed teams, or when teams become fully distributed (such as during a pandemic requiring social distancing).
One motivation for doing this is to reduce costs or gain access to engineering talent that is distant from a team's geographical center, but over time, companies have found that doing so necessitates significant investments in socialization to ensure quality, minimizing geographical, temporal and cultural separation (<a href="#smite">Smite 2010</a>).
Researchers have found that there appear to be fundamental tradeoffs between productivity, quality, and/or profits in these settings (<a href="#ramasubbu">Ramasubbu et al. 2011</a>).
For example, more distance appears to lead to slower communication (<a href="#wagstrom">Wagstrom & Datta 2014</a>).
Despite these tradeoffs, most rigorous studies of the cost of distributed development have found that when companies work hard to minimize temporal and cultural separation, the actual impact on defects was small (<a href="#kocaguneli">Kocaguneli et al. 2013</a>).
These efforts to minimize separation include more structured onboarding practices, more structured communication, and more structured processes, as well as systematic efforts to build and maintain trusting social relationships.
Some researchers have begun to explore even more extreme models of distributed development, hiring contract developers to complete microtasks over a few days without hiring them as employees; early studies suggest that these models have the worst of outcomes, with greater costs, poor scalability, and more significant quality issues (<a href="#stol">Stol & Fitzgerald 2014</a>).
</p>
<p>
A critical part of ensuring that a team is successful is having someone responsible for managing these factors of distance, pace, ownership, awareness, and overall process.
The most obvious person to oversee this is, of course, a project manager.
Research on what skills software engineering project managers need suggests that while some technical knowledge is necessary, it is the soft skills for managing all of these factors of communication and coordination that distinguish great managers (<a href="#kall">Kalliamvakou et al. 2017</a>).
</p>
<p>
While all of this research has strong implications for practice, industry has largely explored its own ideas about process, devising frameworks that addressed issues of distance, pace, ownership, awareness, and process improvement.
Extreme Programming (<a href="#beck">Beck 1999</a>) was one of these frameworks, and it was full of ideas:
</p>
<ul>
<li>Be iterative</li>
<li>Do small releases</li>
<li>Keep design simple</li>
<li>Write unit tests</li>
<li>Refactor to iterate</li>
<li>Use pair programming</li>
<li>Integrate continuously</li>
<li>Everyone owns everything</li>
<li>Use an open workspace</li>
<li>Work sane hours</li>
</ul>
<p>Note that none of these had any empirical evidence to back them. Moreover, Beck described in his original proposal that these ideas were best for "<em>outsourced or in-house development of small- to medium-sized systems where requirements are vague and likely to change</em>", but as industry often does, it began hyping XP as a universal solution to software project management woes and adopted all kinds of combinations of these ideas, adapting them to existing processes. In reality, the value of XP appears to depend on highly project-specific factors (<a href="#muller">M&uuml;ller & Padberg 2003</a>), while the core ideas that industry has adopted are valuing feedback, communication, simplicity, and respect for individuals and the team (<a href="#sharp">Sharp & Robinson 2004</a>). Researchers continue to investigate the merits of the list above; for example, numerous studies have investigated the effects of pair programming on defects, finding small but measurable benefits (<a href="#dibella">di Bella et al. 2013</a>).</p>
<p>At the same time, Beck also began espousing the idea of <a href="http://agilemanifesto.org/" target="_blank">"Agile" methods</a>, which celebrated many of the values underlying Extreme Programming, such as focusing on individuals, keeping things simple, collaborating with customers, and being iterative. This idea of being agile was even more popular and spread widely in industry and research, even though many of the same ideas appeared much earlier in Boehm's work on the Spiral model. Researchers found that Agile methods can increase developer enthusiasm (<a href="#syed">Syed-Abdullah et al. 2006</a>), that agile teams need different roles such as Mentor, Co-ordinator, Translator, Champion, Promoter, and Terminator (<a href="#hoda">Hoda et al. 2010</a>), and that teams are combining agile methods with all kinds of process ideas from other project management frameworks such as <a href="https://en.wikipedia.org/wiki/Scrum_(software_development)">Scrum</a> (meet daily to plan work, plan two-week sprints, maintain a backlog of work) and Kanban (visualize the workflow, limit work-in-progress, manage flow, make policies explicit, and implement feedback loops) (<a href="#al-baik">Al-Baik & Miller 2015</a>). Research has also found that transitioning a team to Agile methods is slow and complex because it requires everyone on a team to change their behavior, beliefs, and practices (<a href="#hoda2">Hoda & Noble 2017</a>).</p>
<p>Ultimately, all of this energy around process ideas in industry is exciting, but disorganized. None of these efforts really get to the core of what makes software projects difficult to manage. One effort in research to get to this core has been contributing new theories that explain these difficulties. The first is Herbsleb's <strong>Socio-Technical Theory of Coordination (STTC)</strong>. The idea of the theory is quite simple: <em>technical dependencies</em> in engineering decisions (e.g., this function calls this other function, this data type stores this other data type) define the <em>social constraints</em> that the organization must solve in a variety of ways to build and maintain software (<a href="#herbslebmockus">Herbsleb & Mockus 2003</a>, <a href="#herbsleb">Herbsleb 2016</a>). The better the organization builds processes and awareness tools to ensure that the people who own those engineering dependencies are communicating and aware of each others' work, the fewer defects will occur. Herbsleb referred to this alignment as <em>sociotechnical congruence</em>, and conducted a number of studies demonstrating its predictive and explanatory power.</p>
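<p>One way to make this alignment concrete is to compute it. The sketch below is a minimal, illustrative take on a congruence measure in the spirit of this line of work: it derives coordination needs from cross-owner file dependencies, then asks what fraction of those needs are matched by actual communication. The ownership, dependency, and communication data are invented for illustration, and real studies use far richer signals.</p>
<pre><code class="python">
# Hypothetical sketch of a sociotechnical congruence measure:
# congruence = (coordination needs matched by actual communication)
#              / (total coordination needs).
# All names and data below are invented for illustration.

def congruence(ownership, file_deps, comms):
    """ownership: file -> developer; file_deps: set of (fileA, fileB)
    dependency pairs; comms: set of frozenset({devA, devB}) pairs of
    developers who actually communicate."""
    # Each dependency between files with different owners creates a
    # coordination need between those two developers.
    needs = set()
    for a, b in file_deps:
        da, db = ownership[a], ownership[b]
        if da != db:
            needs.add(frozenset({da, db}))
    if not needs:
        return 1.0  # no cross-owner dependencies, nothing to coordinate
    met = {pair for pair in needs if pair in comms}
    return len(met) / len(needs)

ownership = {"parser.c": "ana", "lexer.c": "ben", "ast.c": "ana", "gen.c": "chi"}
file_deps = {("parser.c", "lexer.c"), ("parser.c", "ast.c"), ("gen.c", "ast.c")}
comms = {frozenset({"ana", "ben"})}  # ana and ben talk; chi is out of the loop

print(congruence(ownership, file_deps, comms))  # 0.5: one of two needs met
</code></pre>
<p>In this toy example, the unmet coordination need between the owners of <code>gen.c</code> and <code>ast.c</code> is exactly where STTC would predict defects to emerge.</p>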
<p>In my recent work (<a href="#ko2">Ko 2017</a>), I extend this idea to congruence with beliefs about <em>product</em> value, claiming that successful software products require the constant, collective communication and agreement of a coherent proposition of a product's value across UX, design, engineering, product, marketing, sales, support, and even customers. A team needs to achieve Herbsleb's sociotechnical congruence to have a successful product, but that alone is not enough: the rest of the organization has to have a consistent understanding of what is being built and why, even as that understanding evolves over time.</p>
<center class="lead"><a href="comprehension.html">Next chapter: Comprehension</a></center>
<h2>Further reading</h2>
<small>
<p id="al-baik">Al-Baik, O., & Miller, J. (2015). <a href="https://link.springer.com/article/10.1007/s10664-014-9340-x" target="_blank">The kanban approach, between agility and leanness: a systematic review</a>. Empirical Software Engineering, 20(6), 1861-1897.</p>
<p id="beck">Beck, K. (1999). <a href="http://ieeexplore.ieee.org/abstract/document/796139/" target="_blank">Embracing change with extreme programming</a>. Computer, 32(10), 70-77.</p>
<p id="bird">Christian Bird, Nachiappan Nagappan, Brendan Murphy, Harald Gall, and Premkumar Devanbu. 2011. <a href="http://dx.doi.org/10.1145/2025113.2025119" target="_blank">Don't touch my code! Examining the effects of ownership on software quality</a>. In Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11). ACM, New York, NY, USA, 4-14.</p>
<p id="boehm">Boehm, B. W. (1988). <a href="http://ieeexplore.ieee.org/abstract/document/59/" target="_blank">A spiral model of software development and enhancement</a>. Computer, 21(5), 61-72.</p>
<p id="brooks">Brooks, F.P. (1995). <a href="https://books.google.com/books?id=Yq35BY5Fk3gC" target="_blank">The Mythical Man Month</a>.</p>
<p id="dibella">di Bella, E., Fronza, I., Phaphoom, N., Sillitti, A., Succi, G., & Vlasenko, J. (2013). <a href="https://doi.org/10.1109/TSE.2012.68">Pair Programming and Software Defects--A Large, Industrial Case Study</a>. IEEE Transactions on Software Engineering, 39(7), 930-953.</p>
<p id="demarco">DeMarco, T. and Lister, T. (1987). <a href="https://books.google.com/books?id=TVQUAAAAQBAJ" target="_blank">Peopleware: Productive Projects and Teams</a>.</p>
<p id="dyba">Tore Dyb&aring;. 2003. <a href="http://dx.doi.org/10.1145/940071.940092" target="_blank">Factors of software process improvement success in small and large organizations: an empirical study in the Scandinavian context</a>. In Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (ESEC/FSE-11). ACM, New York, NY, USA, 148-157.</p>
<p id="dyba2">Dyb&aring;, T. (2002). <a href="https://link.springer.com/article/10.1023/A:1020535725648" target="_blank">Enabling software process improvement: an investigation of the importance of organizational issues</a>. Empirical Software Engineering, 7(4), 387-390.</p>
<p id="herbslebmockus">James D. Herbsleb and Audris Mockus. 2003. <a href="http://dx.doi.org/10.1145/940071.940091" target="_blank">Formulation and preliminary test of an empirical theory of coordination in software engineering</a>. In Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (ESEC/FSE-11). ACM, New York, NY, USA, 138-147.</p>
<p id="herbsleb">James Herbsleb. 2016. <a href="https://doi.org/10.1145/2950290.2994160" target="_blank">Building a socio-technical theory of coordination: why and how</a>. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 2-10.</p>
<p id="hoda">Rashina Hoda, James Noble, and Stuart Marshall. 2010. <a href="https://doi.org/10.1145/1806799.1806843" target="_blank">Organizing self-organizing teams</a>. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 285-294.</p>
<p id="hoda2">Hoda, R., & Noble, J. (2017). Becoming agile: a grounded theory of agile transitions in practice. In Proceedings of the 39th International Conference on Software Engineering (pp. 141-151). IEEE Press.</p>
<p id="kall">Kalliamvakou, E., Bird, C., Zimmermann, T., Begel, A., DeLine, R., & German, D. M. (2017). <a href="https://ieeexplore.ieee.org/abstract/document/8094304/" target="_blank">What makes a great manager of software engineers</a>? IEEE Transactions on Software Engineering.</p>
<p id="ko">Amy J. Ko, Robert DeLine, and Gina Venolia. 2007. <a href="http://dx.doi.org/10.1109/ICSE.2007.45" target="_blank">Information Needs in Collocated Software Development Teams</a>. In Proceedings of the 29th international conference on Software Engineering (ICSE '07). IEEE Computer Society, Washington, DC, USA, 344-353.</p>
<p id="ko2">Amy J. Ko (2017). <a href="http://faculty.washington.edu/ajko/papers/Ko2017AnswerDashReflection.pdf" target="_blank">A Three-Year Participant Observation of Software Startup Software Evolution</a>. International Conference on Software Engineering (ICSE), Software Engineering in Practice, to appear.</p>
<p id="kocaguneli">Ekrem Kocaguneli, Thomas Zimmermann, Christian Bird, Nachiappan Nagappan, and Tim Menzies. 2013. <a href="https://doi.org/10.1109/ICSE.2013.6606637" target="_blank">Distributed development considered harmful?</a> In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 882-890.</p>
<p id="milewski">Milewski, A. E. (2007). <a href="https://link.springer.com/article/10.1007/s10664-007-9036-6" target="_blank">Global and task effects in information-seeking among software engineers</a>. Empirical Software Engineering, 12(3), 311-326.</p>
<p id="muller">Matthias M. M&uuml;ller and Frank Padberg. 2003. <a href="http://dx.doi.org/10.1145/940071.940094" target="_blank">On the economic evaluation of XP projects</a>. In Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (ESEC/FSE-11). ACM, New York, NY, USA, 168-177.</p>
<p id="ramasubbu">Narayan Ramasubbu, Marcelo Cataldo, Rajesh Krishna Balan, and James D. Herbsleb. 2011. <a href="https://doi.org/10.1145/1985793.1985830" target="_blank">Configuring global software teams: a multi-company analysis of project productivity, quality, and profits</a>. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 261-270.</p>
<p id="sharp">Sharp, H., & Robinson, H. (2004). <a href="https://doi.org/10.1023/B:EMSE.0000039884.79385.54">An ethnographic study of XP practice</a>. Empirical Software Engineering, 9(4), 353-375.</p>
<p id="rubin">Julia Rubin and Martin Rinard. 2016. <a href="https://doi.org/10.1145/2884781.2884871" target="_blank">The challenges of staying together while moving fast: an exploratory study</a>. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 982-993.</p>
<p id="smite">Smite, D., Wohlin, C., Gorschek, T., & Feldt, R. (2010). <a href="https://link.springer.com/article/10.1007/s10664-009-9123-y" target="_blank">Empirical evidence in global software engineering: a systematic review</a>. Empirical software engineering, 15(1), 91-118.</p>
<p id="stol">Klaas-Jan Stol and Brian Fitzgerald. 2014. <a href="http://dx.doi.org/10.1145/2568225.2568249" target="_blank">Two's company, three's a crowd: a case study of crowdsourcing software development</a>. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 187-198.</p>
<p id="syed">Syed-Abdullah, S., Holcombe, M., & Gheorge, M. (2006). <a href="https://link.springer.com/article/10.1007%2Fs10664-006-5968-5" target="_blank">The impact of an agile methodology on the well being of development teams</a>. Empirical Software Engineering, 11(1), 143-167.</p>
<p id="treude">Christoph Treude and Margaret-Anne Storey. 2010. <a href="http://dx.doi.org/10.1145/1806799.1806854" target="_blank">Awareness 2.0: staying aware of projects, developers and tasks using dashboards and feeds</a>. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 365-374.</p>
<p id="wagstrom">Patrick Wagstrom and Subhajit Datta. 2014. <a href="http://dx.doi.org/10.1145/2568225.2568279" target="_blank">Does latitude hurt while longitude kills? Geographical and temporal separation in a large scale software development project</a>. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 199-210.</p>
<p id="weinberg">Gerald M. Weinberg. 1982. <a href="http://dl.acm.org/citation.cfm?id=807743" target="_blank">Over-structured management of software engineering</a>. In Proceedings of the 6th international conference on Software engineering (ICSE '82). IEEE Computer Society Press, Los Alamitos, CA, USA, 2-8.</p>
</small>
<h2>Podcasts</h2>
<small>
<p>Software Engineering Daily (2016). <a href="https://softwareengineeringdaily.com/2016/04/06/git-workflows-tim-pettersen/" target="_blank">Git Workflows with Tim Pettersen</a>.</p>
<p>Software Engineering Daily (2017). <a href="https://softwareengineeringdaily.com/2017/02/08/engineering-management-with-mike-borozdin/" target="_blank">Engineering Management with Mike Borozdin</a>.</p>
<p>Software Engineering Daily (2017). <a href="https://softwareengineeringdaily.com/2016/09/22/tech-leadership-with-jeff-norris/" target="_blank">Tech Leadership with Jeff Norris</a>.</p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

View file

@ -1,157 +1,6 @@
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<!-- UPDATE -->
<title>Productivity</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/productivity" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/productivity.jpg" class="img-responsive" />
<small>Credit: unknown</small>
<h1>Productivity</h1>
<div class="lead">Amy J. Ko</div>
<p>When we think of productivity, we usually have a vague concept of a rate of work per unit time. Where it gets tricky is in defining "work". On an individual level, work can be easier to define, because developers often have specific concrete tasks that they're assigned. But until those tasks are done, it's not really easy to define progress (well, it's not that easy to define "done" sometimes either, but that's a topic for a later chapter). When you start considering work at the scale of a team or an organization, productivity gets even harder to define, since an individual's productivity might be increased by ignoring every critical request from a teammate, harming the team's overall productivity.</p>
<p>
Despite the challenge in defining productivity, there are numerous factors that affect productivity.
For example, at the individual level, having the right tools can result in an order of magnitude difference in speed at accomplishing a task.
One study I ran found that developers using the Eclipse IDE spent a third of their time just physically navigating between source files (<a href="#koide">Ko et al. 2005</a>).
With the right navigation aids, developers could be writing code and fixing bugs 30% faster.
In fact, some tools like Mylyn automatically bring relevant code to the developer rather than making them navigate to it, greatly increasing the speed with which developers can accomplish a task (<a href="#kersten">Kersten & Murphy 2006</a>).
Long gone are the days when developers should be using bare command lines and text editors to write code: IDEs can and do greatly increase productivity when used and configured with speed in mind.
</p>
<p>
Of course, individual productivity is about more than just tools.
Studies of workplace productivity show that developers have highly fragmented days, interrupted by meetings, emails, coding, and non-work distractions (<a href="#meyer">Meyer et al. 2017</a>).
These interruptions are often viewed negatively from an individual perspective, but may be highly valuable from a team and organizational perspective.
Moreover, productivity is not just about skills for managing time, but also about the many other skills that shape developer expertise, including skills in designing architectures, debugging, testing, programming languages, and more (<a href="#baltes">Baltes et al. 2018</a>).
</p>
<p>That said, productivity is not just about individual developers. Because communication is a key part of team productivity, an individual's productivity is as much determined by their ability to collaborate and communicate with other developers. In a study spanning dozens of interviews with senior software engineers, Li et al. found that the majority of critical attributes for software engineering skill (productivity included) concerned their interpersonal skills, their communication skills, and their ability to be resourceful within their organization (<a href="#li">Li et al. 2015</a>). Similarly, LaToza et al. found that the primary bottleneck in productivity was communication with teammates, primarily because waiting for replies was slower than just looking something up (<a href="#latoza">LaToza et al. 2006</a>). Of course, looking something up has its own problems. While StackOverflow is an incredible resource for missing documentation (<a href="#mamykina">Mamykina et al. 2011</a>), it also is full of all kinds of misleading and incorrect information contributed by developers without sufficient expertise to answer questions (<a href="#barua">Barua et al. 2014</a>). Finally, because communication is such a critical part of retrieving information, adding more developers to a team has surprising effects. One study found that adding people to a team slowly enough to allow them to onboard effectively could reduce defects, but adding them too fast led to increases in defects (<a href="#meneely">Meneely et al. 2011</a>).</p>
<p>
Another dimension of productivity is learning.
Great engineers are resourceful, quick learners (<a href="#li">Li et al. 2015</a>).
New engineers must be even more resourceful, even though their instincts are often to hide their lack of expertise from exactly the people they need help from (<a href="#begel">Begel & Simon 2008</a>).
Experienced developers know that learning is important and now rely heavily on social media such as Twitter to follow industry changes, build learning relationships, and discover new concepts and platforms to learn (<a href="#singer">Singer et al. 2012</a>).
And, of course, developers now rely heavily on web search to fill in inevitable gaps in their knowledge about APIs, error messages, and myriad other details about languages and platforms (<a href="#xia">Xia et al. 2017</a>).
</p>
<p>Unfortunately, learning is no easy task. One of my earliest studies as a researcher investigated the barriers to learning new programming languages and systems, finding six distinct types of content that are challenging (<a href="#ko">Ko & Myers 2004</a>). To use a programming platform successfully, people need to overcome <em>design</em> barriers, which are the abstract computational problems that must be solved, independent of the languages and APIs. People need to overcome <em>selection</em> barriers, which involve finding the right abstractions or APIs to achieve the design they have identified. People need to overcome <em>use</em> and <em>coordination</em> barriers, which involve operating and coordinating different parts of a language or API together to achieve novel functionality. People need to overcome <em>comprehension</em> barriers, which involve knowing what can go wrong when using part of a language or API. And finally, people need to overcome <em>information</em> barriers, which are posed by the limited ability of tools to inspect a program's behavior at runtime during debugging. Every single one of these barriers has its own challenges, and developers encounter them every time they are learning a new platform, regardless of how much expertise they have.</p>
<p>Aside from individual and team factors, productivity is also influenced by the particular features of a project's code, how the project is managed, or the environment and organizational culture in which developers work (<a href="#vosburgh">Vosburgh et al. 1984</a>, <a href="#demarco">DeMarco & Lister 1985</a>). In fact, these might actually be the <em>biggest</em> factors in determining developer productivity. This means that even a developer that is highly productive individually cannot rescue a team that is poorly structured working on poorly architected code. This might be why highly productive developers are so difficult to recruit to poorly managed teams.</p>
<p>A different way to think about productivity is to consider it from a "waste" perspective, in which waste is defined as any activity that does not contribute to a product's value to users or customers. Sedano et al. investigated this view across two years and eight software development projects in a software development consultancy (<a href="#sedano">Sedano et al. 2017</a>), contributing a taxonomy of waste:</p>
<ul>
<li><strong>Building the wrong feature or product</strong>. The cost of building a feature or product that does not address user or business needs.</li>
<li><strong>Mismanaging the backlog</strong>. The cost of duplicating work, expediting lower value user features, or delaying necessary bug fixes.</li>
<li><strong>Rework</strong>. The cost of altering delivered work that should have been done correctly but was not.</li>
<li><strong>Unnecessarily complex solutions</strong>. The cost of creating a more complicated solution than necessary, a missed opportunity to simplify features, user interface, or code.</li>
<li><strong>Extraneous cognitive load</strong>. The costs of unneeded expenditure of mental energy, such as poorly written code, context switching, confusing APIs, or technical debt. </li>
<li><strong>Psychological distress</strong>. The costs of burdening the team with unhelpful stress arising from low morale, pace, or interpersonal conflict.</li>
<li><strong>Waiting/multitasking</strong>. The cost of idle time, often hidden by multi-tasking, due to slow tests, missing information, or context switching.</li>
<li><strong>Knowledge loss</strong>. The cost of re-acquiring information that the team once knew.</li>
<li><strong>Ineffective communication</strong>. The cost of incomplete, incorrect, misleading, inefficient, or absent communication.</li>
</ul>
<p>One could imagine using these concepts to refine processes and practices in a team, helping both developers and managers be more aware of sources of waste that harm productivity.</p>
<p>
Of course, productivity is not only shaped by professional and organizational factors, but personal ones as well.
Consider, for example, an engineer that has friends, wealth, health care, health, stable housing, sufficient pay, and safety: they likely have everything they need to bring their full attention to their work.
In contrast, imagine an engineer that is isolated, has immense debt, has no health care, has a chronic disease like diabetes, is being displaced from an apartment by gentrification, has lower pay than their peers, or does not feel safe in public.
Any one of these factors might limit an engineer's ability to be productive at work; some people might experience multiple, or even all of these factors, especially if they are a person of color in the United States, who has faced a lifetime of racist inequities in school, health care, and housing.
Because of the potential for such inequities to influence someone's ability to work, managers and organizations need to make space for surfacing these inequities at work, so that teams can acknowledge them, plan around them, and ideally address them through targeted supports.
Anything less tends to make engineers feel unsupported, which will only decrease their motivation to contribute to a team.
</p>
<p>
These widely varying conceptions of productivity reveal that programming in a software engineering context is about far more than just writing a lot of code.
It's about coordinating productively with a team, synchronizing your work with an organization's goals, and most importantly, reflecting on ways to change work to achieve those goals more effectively.
</p>
<center class="lead"><a href="quality.html">Next chapter: Quality</a></center>
<h2>Further reading</h2>
<small>
<p id="barua">Barua, A., Thomas, S. W., & Hassan, A. E. (2014). <a href="http://link.springer.com/article/10.1007/s10664-012-9231-y" target="_blank">What are developers talking about? an analysis of topics and trends in stack overflow</a>. Empirical Software Engineering, 19(3), 619-654.</p>
<p id="baltes">Baltes, S., & Diehl, S. (2018, October). <a href="https://doi.org/10.1145/3236024.3236061">Towards a theory of software development expertise</a>. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (pp. 187-200). ACM.</p>
<p id="begel">Begel, A., & Simon, B. (2008, September). <a href="http://dl.acm.org/citation.cfm?id=1404522" target="_blank">Novice software developers, all over again</a>. In Proceedings of the Fourth international Workshop on Computing Education Research (pp. 3-14). ACM.</p>
<p id="casalnuovo">Casey Casalnuovo, Bogdan Vasilescu, Premkumar Devanbu, and Vladimir Filkov. 2015. <a href="https://doi.org/10.1145/2786805.2786854" target="_blank">Developer onboarding in GitHub: the role of prior social links and language experience</a>. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 817-828.</p>
<p id="chong">Jan Chong and Tom Hurlbutt. 2007. <a href="http://dx.doi.org/10.1109/ICSE.2007.87" target="_blank">The Social Dynamics of Pair Programming</a>. In Proceedings of the 29th international conference on Software Engineering (ICSE '07). IEEE Computer Society, Washington, DC, USA, 354-363.</p>
<p id="demarco">Tom DeMarco and Tim Lister. 1985. <a href="http://dl.acm.org/citation.cfm?id=319651" target="_blank">Programmer performance and the effects of the workplace</a>. In Proceedings of the 8th international conference on Software engineering (ICSE '85). IEEE Computer Society Press, Los Alamitos, CA, USA, 268-272.</p>
<p id="duala">Ekwa Duala-Ekoko and Martin P. Robillard. 2012. <a href="http://dl.acm.org/citation.cfm?id=2337255" target="_blank">Asking and answering questions about unfamiliar APIs: an exploratory study</a>. In Proceedings of the 34th International Conference on Software Engineering (ICSE '12). IEEE Press, Piscataway, NJ, USA, 266-276.</p>
<p id="li">Paul Luo Li, Amy J. Ko, and Jiamin Zhu. 2015. <a href="http://dl.acm.org/citation.cfm?id=2818839" target="_blank">What makes a great software engineer?</a>. In Proceedings of the 37th International Conference on Software Engineering - Volume 1 (ICSE '15), Vol. 1. IEEE Press, Piscataway, NJ, USA, 700-710.</p>
<p id="johnson">Brittany Johnson, Rahul Pandita, Emerson Murphy-Hill, and Sarah Heckman. 2015. <a href="https://doi.org/10.1145/2786805.2803197" target="_blank">Bespoke tools: adapted to the concepts developers know</a>. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 878-881.</p>
<p id="kersten">Mik Kersten and Gail C. Murphy. 2006. <a href="http://dx.doi.org/10.1145/1181775.1181777" target="_blank">Using task context to improve programmer productivity</a>. In Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering (SIGSOFT '06/FSE-14). ACM, New York, NY, USA, 1-11.</p>
<p id="ko">Ko, A. J., Myers, B. A., & Aung, H. H. (2004, September). <a href="http://ieeexplore.ieee.org/abstract/document/1372321/" target="_blank">Six learning barriers in end-user programming systems</a>. In Visual Languages and Human Centric Computing, 2004 IEEE Symposium on (pp. 199-206). IEEE.</p>
<p id="koide">Amy J. Ko, Htet Aung, and Brad A. Myers. 2005. <a href="http://ieeexplore.ieee.org/abstract/document/1553555/" target="_blank">Eliciting design requirements for maintenance-oriented IDEs: a detailed study of corrective and perfective maintenance tasks</a>. In Proceedings of the 27th international conference on Software engineering (ICSE '05). ACM, New York, NY, USA, 126-135.</p>
<p id="latoza">Thomas D. LaToza, Gina Venolia, and Robert DeLine. 2006. <a href="http://dx.doi.org/10.1145/1134285.1134355" target="_blank">Maintaining mental models: a study of developer work habits</a>. In Proceedings of the 28th international conference on Software engineering (ICSE '06). ACM, New York, NY, USA, 492-501.</p>
<p id="mamykina">Mamykina, L., Manoim, B., Mittal, M., Hripcsak, G., & Hartmann, B. (2011, May). <a href="http://dl.acm.org/citation.cfm?id=1979366" target="_blank">Design lessons from the fastest q&a site in the west</a>. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 2857-2866).</p>
<p id="meneely">Andrew Meneely, Pete Rotella, and Laurie Williams. 2011. <a href="http://dx.doi.org/10.1145/2025113.2025128" target="_blank">Does adding manpower also affect quality? An empirical, longitudinal analysis</a>. In Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11). ACM, New York, NY, USA, 81-90.</p>
<p id="meyer">Meyer, A. N., Barton, L. E., Murphy, G. C., Zimmermann, T., & Fritz, T. (2017). <a href="https://doi.org/10.1109/TSE.2017.2656886">The work life of developers: Activities, switches and perceived productivity</a>. IEEE Transactions on Software Engineering, 43(12), 1178-1193.</p>
<p id="sedano">Sedano, T., Ralph, P., & P&eacute;raire, C. (2017, May). <a href="http://dl.acm.org/citation.cfm?id=3097385">Software development waste</a>. In Proceedings of the 39th International Conference on Software Engineering (pp. 130-140). IEEE Press.</p>
<p id="singer">Leif Singer, Fernando Figueira Filho, and Margaret-Anne Storey. 2014. <a href="http://dx.doi.org/10.1145/2568225.2568305" target="_blank">Software engineering at the speed of light: how developers stay current using twitter</a>. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 211-221.</p>
<p id="stylos">Jeffrey Stylos and Brad A. Myers. 2008. <a href="http://dx.doi.org/10.1145/1453101.1453117">The implications of method placement on API learnability</a>. In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering (SIGSOFT '08/FSE-16). ACM, New York, NY, USA, 105-112.</p>
<p id="vosburgh">J. Vosburgh, B. Curtis, R. Wolverton, B. Albert, H. Malec, S. Hoben, and Y. Liu. 1984. <a href="http://dl.acm.org/citation.cfm?id=801963" target="_blank">Productivity factors and programming environments</a>. In Proceedings of the 7th international conference on Software engineering (ICSE '84). IEEE Press, Piscataway, NJ, USA, 143-152.</p>
<p id="xia">Xia, X., Bao, L., Lo, D., Kochhar, P. S., Hassan, A. E., & Xing, Z. (2017). <a href="https://link.springer.com/article/10.1007/s10664-017-9514-4">What do developers search for on the web?</a> Empirical Software Engineering, 22(6), 3149-3185.</p>
</small>
<h2>Podcasts</h2>
<small>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2016/11/09/reflections-of-an-old-programmer-with-ben-northrup/">Reflections of an Old Programmer</a></p>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2015/12/23/hiring-engineers-with-ammon-bartram/">Hiring Engineers with Ammon Bartram</a></p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>


@@ -1,169 +1,6 @@
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>Quality</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/quality" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/pomegranate.jpg" class="img-responsive" />
<small>Credit: Anton Croos</small>
<h1>Quality</h1>
<div class="lead">Amy J. Ko</div>
<p>There are numerous ways a software project can fail: projects can be over budget, they can ship late, they can fail to be useful, or they can simply not be useful enough. Evidence clearly shows that success is highly contextual and stakeholder-dependent: success might be financial, social, physical and even emotional, suggesting that software engineering success is a multifaceted variable that cannot be explained simply by user satisfaction, profitability or meeting requirements, budgets and schedules (<a href="#ralph">Ralph & Kelly 2014</a>).</p>
<p>One of the central reasons for this is that there are many distinct <b>software qualities</b> that software can have and depending on the stakeholders, each of these qualities might have more or less importance. For example, a safety critical system such as flight automation software should be reliable and defect-free, but it's okay if it's not particularly learnable&mdash;that's what training is for. A video game, however, should probably be fun and learnable, but it's fine if it ships with a few defects, as long as they don't interfere with fun (<a href="#murphy">Murphy-Hill et al. 2014</a>).</p>
<p>There are a surprisingly large number of software qualities (<a href="#boehm">Boehm 1976</a>). Some are aspects of the software implementation:</p>
<table class="table table-striped">
<tr>
<td>Correctness</td>
<td>The extent to which a program behaves according to its specification. If your specifications are ambiguous, correctness is ambiguous.</td>
</tr>
<tr>
<td>Reliability</td>
<td>The extent to which a program behaves the same way over time in the same operating environment. For example, if your online banking app crashes sometimes, it's not reliable.</td>
</tr>
<tr>
<td>Robustness</td>
<td>The extent to which a program can recover from errors or unexpected input. For example, a login form that crashes if an email is formatted improperly isn't very robust. A login form that handles <em>any</em> text input is optimally robust. One can make a system more robust by broadening the range of errors and inputs it can handle in a reasonable way.</td>
</tr>
<tr>
<td>Performance</td>
<td>The extent to which a program uses computing resources economically. Synonymous with "fast" and "zippy". Performance is directly determined by how many instructions a program has to execute to accomplish its operations, but it is difficult to measure because operations, inputs, and the operating environment can vary widely.</td>
</tr>
<tr>
<td>Portability</td>
<td>The extent to which an implementation can run on different platforms without being modified. For example, "universal" applications in the Apple ecosystem that can run on iPhones, iPads, and Mac OS without being modified or recompiled are highly portable.</td>
</tr>
<tr>
<td>Interoperability</td>
<td>The extent to which a system can seamlessly interact with other systems, typically through the use of standards. For example, some software systems use entirely proprietary and secret data formats and communication protocols. These are less interoperable than systems that use industry-wide standards.</td>
</tr>
<tr>
<td>Security</td>
<td>The extent to which only authorized individuals can access information in a software system.</td>
</tr>
</table>
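<p>To make one of these qualities concrete, consider robustness from the table above. The sketch below is purely illustrative (the function name, pattern, and error messages are invented, not drawn from any framework): a login handler that accepts <em>any</em> text input and responds gracefully rather than crashing on malformed email addresses.</p>

```python
import re

# Illustrative email shape check: something@something.something.
# A real system would likely delegate to a vetted validation library.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def handle_login_input(email: str) -> str:
    """Robustly handle any text input: never crash, always respond."""
    if not email or not email.strip():
        return "error: email is required"
    if not EMAIL_PATTERN.match(email.strip()):
        return "error: email is malformed"
    return "ok"
```

<p>The point of the sketch is the quality, not the pattern: every possible input maps to a defined response, so there is no input the form cannot handle.</p>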
<p>Whereas the above qualities are intrinsic to the code behind a system, some qualities are a property of both the code and the developers interacting with the system's code:</p>
<table class="table table-striped">
<tr>
<td>Verifiability</td>
<td>The effort required to verify that software does what it is intended to do. For example, it is hard to verify a safety critical system without either proving it correct or testing it in a safety-critical context (which isn't safe). Take driverless cars, for example: for Google to test their software, they've had to set up thousands of paid drivers to monitor and report problems on the road. In contrast, verifying that a simple static HTML web page works correctly is as simple as opening it in a browser.</td>
</tr>
<tr>
<td>Maintainability</td>
<td>The extent to which software can be corrected, adapted, or perfected. This depends mostly on how comprehensible the implementation of a program is.</td>
</tr>
<tr>
<td>Reusability</td>
<td>The extent to which a program's components can be used for unintended purposes. APIs are quite reusable, whereas black box embedded software (like the software built into your car's traction systems) is not.</td>
</tr>
</table>
<p>Some qualities are determined by how a system is designed, and primarily concern a user's experience with a software system:</p>
<table class="table table-striped">
<tr>
<td>Learnability</td>
<td>The ease with which a person can learn to operate a program. Learnability is multi-dimensional and can be difficult to measure <a href="#grossman">(Grossman et al. 2009)</a>.</td>
</tr>
<tr>
<td>User efficiency</td>
<td>The speed with which a person can perform tasks with a program. For example, think about how many taps and keystrokes it takes you to log in to an app on your phone compared to using a fingerprint sensor like Apple's TouchID.</td>
</tr>
<tr>
<td>Accessibility</td>
<td>The diversity of physical or cognitive abilities that can successfully operate software. For example, something that can only be used with a mouse is less accessible than something that can be used with a mouse, keyboard, or speech. Software can be designed for all abilities, and even automatically adapted for individual abilities (<a href="#wobbrock">Wobbrock et al. 2011</a>).</td>
</tr>
<tr>
<td>Usefulness</td>
<td>The extent to which software solves a problem. Usefulness is often the <em>most</em> important quality because it subsumes all of the other lower-level qualities software can have (e.g., part of what makes a messaging app useful is that it's performant, user efficient, and reliable). That also makes it less useful as a concept, because it can be so difficult to measure for most problems. That said, usefulness is not always the most important quality. For example, if you can sell a product to a customer and get a one-time payment, it might not matter that the product has low usefulness.</td>
</tr>
<tr>
<td>Privacy</td>
<td>The extent to which a system prevents access to information that is intended for a particular audience or use.</td>
</tr>
<tr>
<td>Consistency</td>
<td>The extent to which related functionality in a system leverages the same skills, rather than requiring new skills to learn. For example, in Mac OS, quitting any application requires the same action: command-Q or the Quit menu item in the application menu; this is highly consistent. Less consistent platforms allow applications to offer many different ways of quitting.</td>
</tr>
<tr>
<td>Usability</td>
<td>This quality encompasses all of the qualities above. We use it as a holistic term to represent any quality that affects someone's ability to use a system.</td>
</tr>
<tr>
<td>Bias</td>
<td>The multiple ways in which software can discriminate, exclude, or amplify or reinforce discriminatory or exclusionary structures in society. For example, data used to train a classifier might be racially biased, algorithms might use sexist assumptions about gender, web forms might systematically exclude non-Western names and languages, and applications might be accessible only to people who can see or use a mouse.</td>
</tr>
</table>
<p>Although the lists above are not complete, you might have already noticed some tradeoffs between different qualities. A secure system is necessarily going to be less learnable, because there will be more to learn to operate it. A robust system will likely be less maintainable, because it will have more code to account for its diverse operating environments. Because one cannot achieve all software qualities, and achieving each quality takes significant time, it is necessary to prioritize qualities for each project.</p>
<p>These external notions of quality are not the only qualities that matter. For example, developers often view projects as successful if they offer intrinsically rewarding work (<a href="#procaccino">Procaccino et al. 2005</a>). That may sound selfish, but if developers <em>aren't</em> enjoying their work, they're probably not going to achieve any of the qualities very well. Moreover, there are many organizational factors that can inhibit developers' ability to obtain these rewards. Project complexity, internal and external dependencies that are out of a developer's control, process barriers, budget limitations, deadlines, poor HR planning, and pressure to ship can all interfere with project success (<a href="#lavallee">Lavallee & Robillard 2015</a>).</p>
<p>As I've noted before, the person most responsible for isolating developers from these organizational problems, and most responsible for prioritizing software qualities is a product manager. Check out the podcast below for one product manager's perspectives on the challenges of balancing these different priorities.</p>
<center class="lead"><a href="requirements.html">Next chapter: Requirements</a></center>
<h2>Further reading</h2>
<p id="boehm">Boehm, B.W. 1976. <a href="http://ieeexplore.ieee.org/document/1674590/" target="_blank">Software Engineering</a>, IEEE Transactions on Computers, 25(12), 1226-1241.</p>
<p id="grossman">Grossman, T., Fitzmaurice, G., & Attar, R. (2009, April). <a href="https://doi.org/10.1145/1518701.1518803">A survey of software learnability: metrics, methodologies and guidelines</a>. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 649-658).</p>
<p id="murphy">Emerson Murphy-Hill, Thomas Zimmermann, and Nachiappan Nagappan. 2014. <a href="http://dx.doi.org/10.1145/2568225.2568226" target="_blank">Cowboys, ankle sprains, and keepers of quality: how is video game development different from software development?</a> In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 1-11.</p>
<p id="procaccino">Procaccino, J. D., Verner, J. M., Shelfer, K. M., & Gefen, D. (2005). <a href="http://www.sciencedirect.com/science/article/pii/S0164121204002614" target="_blank">What do software practitioners really think about project success: an exploratory study</a>. Journal of Systems and Software, 78(2), 194-203.</p>
<p id="ralph">Paul Ralph and Paul Kelly. 2014. <a href="http://dx.doi.org/10.1145/2568225.2568261" target="_blank">The dimensions of software engineering success</a>. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 24-35.</p>
<p id="lavallee">Mathieu Lavallee and Pierre N. Robillard. 2015. <a href="http://dl.acm.org/citation.cfm?id=2818754.2818837" target="_blank">Why good developers write bad code: an observational case study of the impacts of organizational factors on software quality</a>. In Proceedings of the 37th International Conference on Software Engineering - Volume 1 (ICSE '15), Vol. 1. IEEE Press, Piscataway, NJ, USA, 677-687.</p>
<p id="wobbrock">Wobbrock, J. O., Kane, S. K., Gajos, K. Z., Harada, S., & Froehlich, J. (2011). <a href="https://doi.org/10.1145/1952383.1952384">Ability-based design: Concept, principles and examples</a>. ACM Transactions on Accessible Computing (TACCESS), 3(3), 9.</p>
<h2>Podcasts</h2>
<p>Software Engineering Daily, <a href="https://softwareengineeringdaily.com/2017/01/18/product-management-with-suzie-prince/">Product Management with Suzie Prince</a></p>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>


@@ -1,141 +1,6 @@
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap requires jQuery -->
<script src="https://code.jquery.com/jquery-3.2.1.min.js" integrity="sha256-hwg4gsxgFZhOsEEamdOYGBf13FyQuiTwlAQgxVSNgt4=" crossorigin="anonymous"></script>
<!-- Load some Lora -->
<link href="https://fonts.googleapis.com/css2?family=Lora:ital,wght@0,400;0,700;1,400;1,700&display=swap" rel="stylesheet">
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css" integrity="sha384-rHyoN1iRsVXV4nD0JutlnGaslCJuC7uwjduW9SVrLvRYooPp2bWYgmgJQIXwl/Sp" crossorigin="anonymous">
<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
<link rel="stylesheet" href="style.css" />
<title>Requirements</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/requirements" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/scaffolding.jpg" class="img-responsive" />
<small>Credit: public domain</small>
<h1>Requirements</h1>
<div class="lead">Amy J. Ko</div>
<p>Once you have a problem, a solution, and a design specification, it's entirely reasonable to start thinking about code. What libraries should we use? What platform is best? Who will build what? After all, there's no better way to test the feasibility of an idea than to build it, deploy it, and find out if it works. Right?</p>
<p>It depends. This mentality towards product design works fine when building and deploying something is cheap and getting feedback has no consequences. Simple consumer applications often benefit from this simplicity, especially early stage ones, because there's little to lose. But what if a beta isn't cheap to build? What if your product only has one shot at adoption? What if you're building something for a client and they want to define success? Worse yet, what if your product could <em>kill</em> people if it's not built properly? In these settings, software teams take an approach of translating a design into a specific, explicit set of goals that must be satisfied in order for the implementation to be complete. We call these goals <b>requirements</b> and we call this process <b>requirements engineering</b> (<a href="#sommerville">Sommerville & Sawyer 1997</a>).</p>
<p>
In principle, requirements are a relatively simple concept.
They are simply statements of what must be true about a system to make the system acceptable.
For example, suppose you were designing an interactive mobile game.
You might want to write the requirement <em>The frame rate must never drop below 60 frames per second.</em>
This could be important for any number of reasons: the game may rely on interactive speeds, your company's reputation may be for high fidelity graphics, or perhaps that high frame rate is key to creating a sense of realism.
Whatever the reasons, expressing it as a requirement makes it explicit that any version of the software that doesn't meet that requirement is unacceptable.
</p>
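<p>A requirement stated this precisely can even be checked automatically. Here is a minimal sketch, assuming we have a log of per-frame rendering times in milliseconds (the function name and threshold parameter are illustrative, not from any particular engine):</p>

```python
def meets_frame_rate_requirement(frame_times_ms, minimum_fps=60):
    """Check the requirement 'the frame rate must never drop below 60 fps'.

    frame_times_ms: durations of individual rendered frames, in milliseconds.
    Any single frame longer than 1000 / minimum_fps ms violates "never".
    """
    budget_ms = 1000.0 / minimum_fps  # ~16.7 ms per frame at 60 fps
    return all(t <= budget_ms for t in frame_times_ms)
```

<p>Notice how the word "never" in the requirement translates directly into checking <em>every</em> frame, not an average; a vaguer phrasing would leave that choice ambiguous.</p>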
<p>
The general idea of writing down requirements is actually a controversial one.
Why not just discover what a system needs to do incrementally, through testing, user feedback, and other methods?
Some of the original arguments for writing down requirements actually acknowledged that software is necessarily built incrementally, but that it is nevertheless useful to write down requirements from the outset (<a href="#parnas">Parnas and Clements 1986</a>).
This is because requirements help you plan everything: what you have to build, what you have to test, and how to know when you're done.
The theory is that by defining requirements explicitly, you plan, and by planning, you save time.
</p>
<p>
Do you really have to plan by <em>writing down</em> requirements?
For example, why not do what designers do, expressing requirements in the form of prototypes and mockups?
These <em>implicitly</em> state requirements, because they suggest what the software is supposed to do without saying it directly.
But for some types of requirements, they actually imply nothing.
For example, how responsive should a web page be? A prototype doesn't really say; a requirement of <em>an average page load time of less than 1 second</em> is quite explicit.
Requirements can therefore be thought of more like an architect's blueprint: they provide explicit definitions and scaffolding of project success.
</p>
<p>
And yet, like design, requirements come from the world and the people in it and not from software (<a href="#jackson">Jackson 2001</a>).
Because they come from the world, requirements are rarely objective or unambiguous.
For example, some requirements come from law, such as the European Union's General Data Protection Regulation (<a href="https://eugdpr.org/">GDPR</a>), which specifies a set of data privacy requirements that all software systems used by EU citizens must meet.
Other requirements might come from public pressure for change, as in Twitter's decision to label particular tweets as having false information or hate speech.
Therefore, the methods that people use to do requirements engineering are quite diverse.
Requirements engineers may work with lawyers to interpret policy.
They might work with regulators to negotiate requirements.
They might also use design methods, such as user research methods and rapid prototyping, to iteratively converge toward requirements (<a href="#lamsweerde">van Lamsweerde 2008</a>).
Therefore, the big difference between design and requirements engineering is that requirements engineers take the process one step further than designers, enumerating <em>in detail</em> every property that the software must satisfy, and engaging with every source of requirements a system might need to meet, not just user needs.
</p>
<p>
There are some approaches to specifying requirements <em>formally</em>.
These techniques allow requirements engineers to automatically identify <em>conflicting</em> requirements, so they don't end up proposing a design that can't possibly exist.
Some even use systems to make requirements "traceable", meaning the high level requirement can be linked directly to the code that meets that requirement (<a href="#mader">Mader & Egyed 2015</a>).
All of this formality has tradeoffs: not only does it take more time to be so precise, but it can negatively affect creativity in concept generation as well (<a href="#mohanani">Mohanani et al. 2014</a>).
</p>
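<p>One lightweight way to approximate traceability is to tag code with the identifier of the requirement it claims to satisfy, so the links can be queried later. This is only a sketch of the idea, not the tooling the cited work studies; the decorator, registry, and requirement ID are all invented for illustration:</p>

```python
# Registry mapping requirement ids to the functions that claim to meet them.
REQUIREMENT_TRACE = {}

def satisfies(requirement_id):
    """Decorator recording which functions trace to a given requirement."""
    def record(func):
        REQUIREMENT_TRACE.setdefault(requirement_id, []).append(func.__name__)
        return func
    return record

@satisfies("REQ-7: average page load under 1 second")
def render_page():
    # Hypothetical page-rendering code.
    return "<html>...</html>"
```

<p>With such a registry, one could ask which code is affected when a requirement changes, or which requirements have no code tracing to them at all.</p>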
<p>Expressing requirements in natural language can mitigate these effects, at the expense of precision. They just have to be <em>complete</em>, <em>precise</em>, <em>non-conflicting</em>, and <em>verifiable</em>. For example, consider a design for a simple <strong>to do list</strong> application. Its requirements might be something like the following:</p>
<ul>
<li>Users must be able to add to do list items with a single action.</li>
<li>To do list items must consist of text and a binary completed state.</li>
<li>Users must be able to edit to do list item text.</li>
<li>Users must be able to toggle the completed state.</li>
<li>Users must be able to delete to do list items.</li>
<li>All changes to to do list item state must save without user intervention.</li>
</ul>
<p>Let's review these requirements against the criteria for good requirements that I listed above:</p>
<ul>
<li>Is it <strong>complete</strong>? I can think of a few more requirements: is the list ordered? How long does state persist? Are there user accounts? Where is data stored? What does it look like? What kinds of user actions must be supported? Is delete undoable? Even just on these completeness dimension, you can see how even a very simple application can become quite complex. When you're generating requirements, your job is to make sure you haven't forgotten important requirements.</li>
<li>Is the list <strong>precise</strong>? Not really. When you add a to do list item, is it added at the beginning? The end? Wherever a user requests it be added? How long can the to do list item text be? Clearly the requirement above is ambiguous.</li>
<li>Are the requirements <strong>non-conflicting</strong>? I <em>think</em> they are since they all seem to be satisfiable together. But some of the missing requirements might conflict: we can't know until we're sure our list is relatively complete.</li>
<li>Finally, are they <strong>verifiable</strong>? Some more than others. Is there a way to guarantee that the state saves successfully all the time? That may be difficult to prove given the vast number of ways the operating environment might prevent saving.</li>
</ul>
<p>Now, the flaws above don't make the requirements "wrong". They just make them "less good." The more complete, precise, non-conflicting, and testable your requirements are, the easier it is to anticipate risk, estimate work, and evaluate progress, since requirements essentially give you a to do list for building and testing your code.</p>
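<p>Even an imperfect requirements list like the one above is concrete enough to sketch a data model against. The following is a hypothetical implementation, not a prescribed design: the class and method names are invented, and persistence is stubbed out, since the requirements leave storage unspecified.</p>

```python
from dataclasses import dataclass

@dataclass
class TodoItem:
    # Requirement: an item consists of text and a binary completed state.
    text: str
    completed: bool = False

class TodoList:
    def __init__(self):
        self.items = []

    def add(self, text: str) -> TodoItem:
        """Requirement: add an item with a single action."""
        item = TodoItem(text)
        self.items.append(item)
        self._save()
        return item

    def edit(self, item: TodoItem, text: str) -> None:
        """Requirement: edit item text."""
        item.text = text
        self._save()

    def toggle(self, item: TodoItem) -> None:
        """Requirement: toggle the completed state."""
        item.completed = not item.completed
        self._save()

    def delete(self, item: TodoItem) -> None:
        """Requirement: delete items."""
        self.items.remove(item)
        self._save()

    def _save(self) -> None:
        # Requirement: all changes save without user intervention.
        # Storage is unspecified by the requirements, so it is stubbed here.
        pass
```

<p>Notice how each method traces to one requirement, and how the ambiguities identified above (where items are added, how state persists) surface immediately as design decisions the code has to make anyway.</p>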
<center class="lead"><a href="architecture.html">Next chapter: Architecture</a></center>
<h2>Further reading</h2>
<small>
<p id="jackson">Jackson, Michael (2001). <a href="https://books.google.com/books?id=8fqIP83Q2IAC" target="_blank">Problem Frames</a>. Addison-Wesley.</p>
<p id="lamsweerde">Axel van Lamsweerde. 2008. <a href="http://dx.doi.org/10.1145/1453101.1453133" target="_blank">Requirements engineering: from craft to discipline</a>. In Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering (SIGSOFT '08/FSE-16). ACM, New York, NY, USA, 238-249.</p>
<p id="mader">M&aumlder, P., & Egyed, A. (2015). <a href="https://doi.org/10.1007/s10664-014-9314-z" target="_blank">Do developers benefit from requirements traceability when evolving and maintaining a software system?</a> Empirical Software Engineering, 20(2), 413-441.</p>
<p id="mohanani">Rahul Mohanani, Paul Ralph, and Ben Shreeve. 2014. <a href="http://dx.doi.org/10.1145/2568225.2568235" target="_blank">Requirements fixation</a>. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 895-906.</p>
<p id="parnas">Parnas, D. L., & Clements, P. C. (1986). <a href="https://doi.org/10.1109/TSE.1986.6312940">A rational design process: How and why to fake it</a>. IEEE Transactions on Software Engineering, (2), 251-257.</p>
<p id="sommerville">Sommerville, I., & Sawyer, P. (1997). <a href="https://books.google.com/books?id=5NnP-VODEc8C" target="_blank">Requirements engineering: a good practice guide</a>. John Wiley & Sons, Inc.</p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>

<!DOCTYPE html>
<html>
<head>
<title>Specifications</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/specifications" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/blueprint.jpg" class="img-responsive" />
<small>Credit: public domain</small>
<h1>Specifications</h1>
<div class="lead">Amy J. Ko</div>
<p>When you make something with code, you're probably used to figuring out a design as you go. You write a function, you choose some arguments, and if you don't like what you see, perhaps you add a new argument to that function and test again. This <a href="https://en.wikipedia.org/wiki/Cowboy_coding" target="_blank">cowboy coding</a>, as some people like to call it, can be great fun! It allows systems to emerge more organically: as you iteratively see your front-end design emerge, the design of your implementation emerges too, co-evolving with how you're feeling about the final product.</p>
<p>As you've probably noticed by now, this type of process doesn't really scale, even when you're working with just a few other people. That argument you added? You just broke a bunch of functions one of your teammates was planning around, and when she commits her code, she gets merge conflicts that cost her an hour to fix, because she has to catch up to whatever design change you made. This lack of planning quickly turns into an uncoordinated mess of individual decision making. Suddenly you're spending all of your time cleaning up coordination messes instead of writing code.</p>
<p>The techniques we've discussed so far for avoiding this boil down to <em>specifying</em> what code should do, so everyone can write code according to a plan. We've talked about <a href="requirements.html">requirements specifications</a>, which are declarations of what software must do from users' perspectives. We've also talked about <a href="architecture.html">architectural specifications</a>, which are high-level declarations of how code will be organized, encapsulated, and coordinated. At the lowest level are <b>functional specifications</b>, which are declarations about the <em>properties of input and output of functions in a program</em>.</p>
<p>In its simplest form, a functional specification can be just some natural language that says what an individual function is supposed to do:</p>
<pre>
// Return the smaller of the two numbers, or if they're equal, the second number.
function min(a, b) {
return a < b ? a : b;
}
</pre>
<p>This comment achieves the core purpose of a specification: to help other developers understand what the requirements and intended behavior of a function are. As long as everyone sticks to this "plan" (everyone calls the function with only numbers and the function always returns the smaller of them), then there shouldn't be any problems.</p>
<p>The comment above is okay, but it's not very precise. It says what is returned and what properties it has, but it only implies that numbers are allowed without saying anything about what kind of numbers. Are decimals allowed or just integers? What about not-a-number (the result of dividing 0 by 0)? Or infinity?</p>
<p>To make these clearer, many languages use <b>static typing</b> to allow developers to specify types explicitly:</p>
<pre>
// Return the smaller of the two integers, or if they're equal, the second number.
function min(int a, int b) {
return a < b ? a : b;
}
</pre>
<p>Because an <code>int</code> is well-defined in most languages, the two inputs to the function are well-defined.</p>
<p>Of course, if the above was JavaScript code (which doesn't support static typing), JavaScript does nothing to actually verify that the data given to <code>min()</code> are actually integers. It's entirely fine with someone sending a string and an object. This probably won't do what you intended, leading to defects.</p>
<p>This brings us to a second purpose of writing functional specifications: to help <em>verify</em> that functions, their input, and their output are correct. Tests of functions and other low-level procedures are called <strong>unit tests</strong>. There are many ways to use specifications to verify correctness. By far, one of the simplest and most widely used kinds of unit tests are <b>assertions</b> (<a href="#clarke">Clarke & Rosenblum 2006</a>). Assertions consist of two things: 1) a check on some property of a function's input or output and 2) some action to notify about violations of these properties. For example, if we wanted to verify that the JavaScript function above had integer values as inputs, we would do this:</p>
<pre>
// Return the smaller of the two numbers, or if they're equal, the second number.
function min(a, b) {
if(!Number.isInteger(a)) alert("First input to min() isn't an integer!");
if(!Number.isInteger(b)) alert("Second input to min() isn't an integer!");
return a < b ? a : b;
}
</pre>
<p>These two new lines of code are essentially functional specifications that declare "<em>If either of those inputs is not an integer, the caller of this function is doing something wrong</em>". This is useful to declare, but assertions have a bunch of problems: if your program <em>can</em> send a non-integer value to <code>min()</code>, but you never test it in a way that does, you'll never see those alerts. This form of <strong>dynamic verification</strong> is therefore very limited, since it provides weaker guarantees about correctness. That said, a study of the use of assertions in a large database of GitHub projects shows that use of assertions <em>is</em> related to fewer defects (<a href="#casalnuovo">Casalnuovo et al. 2015</a>) (though note that I said "related": we have no evidence that assertions actually prevent defects. It may be that developers who use assertions are simply better at avoiding defects.)</p>
<p>Assertions are related to the broader category of <strong>error handling</strong> language features. Error handling includes assertions, but also programming language features like exceptions and exception handlers. Error handling is a form of specification in that <em>checking</em> for errors usually entails explicitly specifying the conditions that determine an error. For example, in the code above, the condition <code>Number.isInteger(a)</code> specifies that the parameter <code>a</code> must be an integer. Other exception handling code such as the Java <code>throws</code> statement indicates the cases in which errors can occur and the corresponding <code>catch</code> statement indicates what is to be done about errors. It is difficult to implement good exception handling that provides granular, clear ways of recovering from errors (<a href="#chen">Chen et al. 2009</a>). Evidence shows that modern developers are still exceptionally bad at designing for errors; one study found that errors are not designed for, few errors are tested for, and exception handling is often overly general, providing little ability for users to understand errors or for developers to debug them (<a href="#ebert">Ebert et al. 2015</a>). These difficulties appear to be because it is difficult to imagine the vast range of errors that can occur (<a href="#maxion">Maxion & Olszewski 2000</a>).</p>
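<p>As a sketch of how this looks with exceptions (hypothetical code, not drawn from the studies above), a function can <code>throw</code> when its input contract is violated, and a caller can <code>catch</code> the error and specify how to recover:</p>

```javascript
// Instead of alerting, min() throws a TypeError when its
// input contract (two integers) is violated.
function min(a, b) {
  if (!Number.isInteger(a) || !Number.isInteger(b))
    throw new TypeError("min() requires integer inputs");
  return a < b ? a : b;
}

// A caller specifies what is to be done about the error with try/catch.
function safeMin(a, b, fallback) {
  try {
    return min(a, b);
  } catch (e) {
    if (e instanceof TypeError) return fallback; // recover with a default
    throw e; // let unrelated errors propagate
  }
}
```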
<p>Researchers have invented many forms of specification that require more work and more thought to write, but can be used to make stronger guarantees about correctness (<a href="#woodcock">Woodcock et al. 2009</a>). For example, many languages support the expression of formal <strong>pre-conditions</strong> and <strong>post-conditions</strong> that represent contracts that must be kept for the program to be correct. (<strong>Formal</strong> means mathematical, facilitating mathematical proofs that these conditions are met). Because these contracts are essentially mathematical promises, we can build tools that automatically read a function's code and verify that what it computes exhibits those mathematical properties using automated theorem proving systems. For example, suppose we wrote some formal specifications for our example above to replace our assertions (using a fictional notation for illustration purposes):</p>
<pre>
// pre-conditions: a in Integers, b in Integers
// post-conditions: result <= a and result <= b
function min(a, b) {
return a < b ? a : b;
}
</pre>
<p>The annotations above require that, no matter what, the inputs have to be integers and the output has to be less than or equal to both values. The automatic theorem prover can then start with the claim that result is always less than or equal to both and begin searching for a counterexample. Can you find a counterexample? Really try. Think about what you're doing while you try: you're probably experimenting with different inputs to identify arguments that violate the contract. That's similar to what automatic theorem provers do, but they use many tricks to explore large possible spaces of inputs all at once, and they do it very quickly.</p>
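<p>We can crudely approximate this search ourselves with a brute-force check over a small, finite range of integers. This is only a sketch of the idea: a theorem prover reasons symbolically about <em>all</em> inputs, while this check covers only the range we enumerate:</p>

```javascript
function min(a, b) {
  return a < b ? a : b;
}

// Search a small range of integer pairs for a violation of the
// post-condition (result <= a and result <= b).
function findCounterexample(lo, hi) {
  for (let a = lo; a <= hi; a++) {
    for (let b = lo; b <= hi; b++) {
      const result = min(a, b);
      if (!(result <= a && result <= b)) return { a, b, result };
    }
  }
  return null; // no counterexample found in this range
}
```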
<p>
There are definite tradeoffs with writing detailed, formal specifications.
The benefits are clear: many companies have written formal functional specifications in order to make <em>completely</em> unambiguous the required behavior of their code, particularly systems that are capable of killing people or losing money, such as flight automation software, banking systems, and even compilers that create executables from code (<a href="#woodcock">Woodcock et al. 2009</a>).
In these settings, it's worth the effort of being 100% certain that the program is correct because if it's not, people can die.
Specifications can have other benefits.
The very act of writing down what you expect a function to do in the form of test cases can slow developers down, causing them to reflect more carefully and systematically about exactly what they expect a function to do (<a href="#fucci">Fucci et al. 2016</a>).
Perhaps if this is true in general, there's value in simply stepping back before you write a function, mapping out pre-conditions and post-conditions in the form of simple natural language comments, and <em>then</em> writing the function to match your intentions.
</p>
<p>
Writing formal specifications can also have downsides.
When the consequences of software failure aren't so high, the difficulty and time required to write and maintain functional specifications may not be worth the effort (<a href="#petre">Petre 2013</a>).
These barriers deter many developers from writing them (<a href="#schiller">Schiller et al. 2014</a>).
Formal specifications can also warp the types of data that developers work with.
For example, it is much easier to write formal specifications about Boolean values and integers than string values.
This can lead engineers to be overly reductive in how they model data (e.g., settling for binary models of gender, when gender is inherently non-binary and multidimensional).
</p>
<center class="lead"><a href="process.html">Next chapter: Process</a></center>
<h2>Further reading</h2>
<p id="casalnuovo">Casey Casalnuovo, Prem Devanbu, Abilio Oliveira, Vladimir Filkov, and Baishakhi Ray. 2015. <a href="http://dl.acm.org/citation.cfm?id=2818846" target="_blank">Assert use in GitHub projects</a>. In Proceedings of the 37th International Conference on Software Engineering - Volume 1 (ICSE '15), Vol. 1. IEEE Press, Piscataway, NJ, USA, 755-766.</p>
<p id="chen">Chen, Chien-Tsun, Yu Chin Cheng, Chin-Yun Hsieh, and I-Lang Wu. "<a href="http://www.sciencedirect.com/science/article/pii/S0164121208001714" target="_blank">Exception handling refactorings: Directed by goals and driven by bug fixing</a>." Journal of Systems and Software 82, no. 2 (2009): 333-345.</p>
<p id="clarke">Clarke, L. A., & Rosenblum, D. S. (2006). <a href="http://dl.acm.org/citation.cfm?id=1127900" target="_blank">A historical perspective on runtime assertion checking in software development</a>. ACM SIGSOFT Software Engineering Notes, 31(3), 25-37.</p>
<p id="ebert">Ebert, F., Castor, F., and Serebrenik, A. (2015). <a href="http://www.sciencedirect.com/science/article/pii/S0164121215000862" target="_blank">An exploratory study on exception handling bugs in Java programs</a>." Journal of Systems and Software, 106, 82-101.</p>
<p id="fucci">Fucci, D., Erdogmus, H., Turhan, B., Oivo, M., & Juristo, N. (2016). <a href="http://ieeexplore.ieee.org/document/7592412/" target="_blank">A Dissection of Test-Driven Development: Does It Really Matter to Test-First or to Test-Last?</a>. IEEE Transactions on Software Engineering.</p>
<p id="maxion">Maxion, Roy A., and Robert T. Olszewski. <a href="http://ieeexplore.ieee.org/document/877848/" target="_blank">Eliminating exception handling errors with dependability cases: a comparative, empirical study</a>." IEEE Transactions on Software Engineering 26, no. 9 (2000): 888-906.</p>
<p id="schiller">Schiller, T. W., Donohue, K., Coward, F., & Ernst, M. D. (2014, May). <a href="http://dl.acm.org/citation.cfm?id=2568285" target="_blank">Case studies and tools for contract specifications</a>. In Proceedings of the 36th International Conference on Software Engineering (pp. 596-607).</p>
<p id="petre">Marian Petre. 2013. <a href="http://dl.acm.org/citation.cfm?id=2486883" target="_blank">UML in practice</a>. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 722-731.</p>
<p id="woodcock">Jim Woodcock, Peter Gorm Larsen, Juan Bicarregui, and John Fitzgerald. 2009. <a href="http://dx.doi.org/10.1145/1592434.1592436" target="_blank">Formal methods: Practice and experience</a>. ACM Comput. Surv. 41, 4, Article 19 (October 2009), 36 pages.</p>
</body>
</html>

<!DOCTYPE html>
<html>
<head>
<title>Verification</title>
<meta http-equiv="refresh" content="0; URL=http://faculty.uw.edu/ajko/books/cooperative-software-development/#/verification" />
</head>
<body>
<p><a href="index.html">Back to table of contents</a></p>
<img src="images/check.png" class="img-responsive" />
<small>Credit: public domain</small>
<h1>Verification</h1>
<div class="lead">Amy J. Ko</div>
<p>How do you know a program does what you intended?</p>
<p>Part of this is being clear about what you intended (by writing <a href="specifications.html">specifications</a>, for example), but your intents, however clear, are not enough: you need evidence that your intents were correctly expressed computationally. To get this evidence, we do <strong>verification</strong>.</p>
<p>There are many ways to verify code. A reasonable first instinct is to simply run your program. After all, what better way to check whether you expressed your intents than to see with your own eyes what your program does? This empirical approach is called <strong>testing</strong>. Some testing is <em>manual</em>, in that a human executes a program and verifies that it does what was intended. Some testing is <em>automated</em>, in that the test is run automatically by a computer. Another way to verify code is to <strong>analyze</strong> it, using logic to verify its correct operation. As with testing, some analysis is <em>manual</em>, since humans do it. We call this manual analysis <em>inspection</em>, whereas other analysis is <em>automated</em>, since computers do it. We call this <em>program analysis</em>. This leads to a nice complementary set of verification techniques along two axes:</p>
<table class="table table-striped">
<tr>
<th></th>
<th>manual</th>
<th>automatic</th>
</tr>
<tr>
<th>empirical</th>
<td>manual testing</td>
<td>automated testing</td>
</tr>
<tr>
<th>analytical</th>
<td>inspection</td>
<td>program analysis</td>
</tr>
</table>
<p>To discuss each of these and their tradeoffs, first we have to cover some theory about verification. The first and simplest ideas are some terminology:</p>
<ul>
<li>A <strong>defect</strong> is some subset of a program's code that exhibits behavior that violates a program's specifications. For example, if a program was supposed to sort a list of numbers in increasing order and print it to a console, but a flipped inequality in the sorting algorithm made it sort them in decreasing order, the flipped inequality is the defect.</li>
<li>A <strong>failure</strong> is the program behavior that results from a defect executing. In our sorting example, the failure is the incorrectly sorted list printed on the console.</li>
<li>A <strong>bug</strong> vaguely refers to either the defect, the failure, or both. When we say "bug", we're not being very precise, but it is a popular shorthand for a defect and everything it causes.</li>
</ul>
<p>Note that because defects are defined relative to <em>intent</em>, whether a behavior is a failure depends entirely on the definition of intent. If that intent is vague, whether something is a defect is vague. Moreover, you can define intents that result in behaviors that seem like failures: for example, I can write a program that intentionally crashes. A crash isn't a failure if it was intended! This might be pedantic, but you'd be surprised how many times I've seen professional developers in bug triage meetings say:</p>
<p><em>"Well, it's worked this way for a long time, and people have built up a lot of workarounds for this bug. It's also really hard to fix. Let's just call this by design. Closing this bug as won't fix."</em></p>
<h2>Testing</h2>
<p>So how do you <em>find</em> defects in a program? Let's start with testing. Testing is generally the easiest kind of verification to do, but as a practice, it has questionable efficacy. Empirical studies of testing find that it <em>is</em> related to fewer defects in the future, but not strongly related, and it's entirely possible that it's not the testing itself that results in fewer defects, but that other activities (such as more careful implementation) lead to both fewer defects and more testing (<a href="#ahmed">Ahmed et al. 2016</a>). At the same time, modern developers don't test as much as they think they do (<a href="#beller">Beller et al. 2015</a>). Moreover, students are often not convinced of the return on investment of automated tests and often opt for laborious manual tests (even though they regret it later) (<a href="#pham">Pham et al. 2014</a>). Testing is therefore in a strange place: it's a widespread activity in industry, but it's often not executed systematically, and there is some evidence that it doesn't seem to help prevent defects from being released.</p>
<p>Why is this? One possibility is that <strong>no amount of testing can prove a program correct with respect to its specifications</strong>. Why? It boils down to the same limitations that exist in science: with empiricism, we can provide evidence that a program <em>does</em> have defects, but we can't provide complete evidence that a program <em>doesn't</em> have defects. This is because even simple programs can execute in an infinite number of different ways.</p>
<p>Consider this JavaScript program:</p>
<pre>
function count(input) {
while(input > 0)
input--;
return input;
}</pre>
<p>The function should always return 0, right? How many possible values of <code>input</code> do we have to try manually to verify that it always does? Well, if <code>input</code> is an integer, there are billions upon billions of possible values (JavaScript represents numbers as 64-bit floating point values, which can represent every integer up to 2<sup>53</sup> exactly). That's not infinite, but that's a lot. But what if <code>input</code> is a string? There are an infinite number of possible strings because they can have any sequence of characters of any length. Now we have to manually test an infinite number of possible inputs. So if we were restricting ourselves to testing, we will never know that the program is correct for all possible inputs. In this case, automatic testing doesn't even help, since there are an infinite number of tests to run.</p>
<p>There are some ideas in testing that can improve how well we can find defects. For example, rather than just testing the inputs you can think of, focus on all of the lines of code in your program. If you find a set of tests that can cause all of the lines of code to execute, you have one notion of <strong>test coverage</strong>. Of course, lines of code aren't enough, because an individual line can contain multiple different paths in it (e.g., <code>value ? getResult1() : getResult2()</code>). So another notion of coverage is executing all of the possible <em>control flow paths</em> through the various conditionals in your program. Executing <em>all</em> of the possible paths is hard, of course, because every conditional in your program doubles the number of possible paths (you have 200 if statements in your program? That's up to 2<sup>200</sup> possible paths, which is more paths than there are <a href="https://en.wikipedia.org/wiki/Observable_universe#Matter_content">atoms in the universe</a>).</p>
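<p>To make this concrete, here is a minimal sketch with a hypothetical <code>fee</code> function: one conditional creates two control flow paths, so two well-chosen tests achieve branch coverage, but every additional conditional doubles the number of possible path combinations:</p>

```javascript
// Hypothetical function with a single conditional, and therefore two paths.
function fee(age) {
  return age < 18 ? 0 : 10;
}

// Two tests, one per branch, cover both paths through fee().
const coverageTests = [
  { input: 12, expected: 0 },  // exercises the age < 18 branch
  { input: 30, expected: 10 }  // exercises the other branch
];
const allPass = coverageTests.every(t => fee(t.input) === t.expected);
```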
<p>There are many types of testing that are common in software engineering:</p>
<ul>
<li><strong>Unit tests</strong> verify that functions return the correct output. For example, a program that implemented a function for finding the day of the week for a given date might also include unit tests that verify for a large number of dates that the correct day of the week is returned. They're good for ensuring widely used low-level functionality is correct.</li>
<li><strong>Integration tests</strong> verify that when all of the functionality of a program is put together into the final product, it behaves according to specifications. Integration tests often operate at the level of user interfaces, clicking buttons, entering text, submitting forms, and verifying that the expected feedback always occurs. Integration tests are good for ensuring that important tasks that users will perform are correct.</li>
<li><strong>Regression tests</strong> verify that behavior that previously worked doesn't stop working. For example, imagine you find a defect that causes logins to fail; you might write a test that verifies that this cause of login failure does not occur, in case someone breaks the same functionality again, even for a different reason. Regression tests are good for ensuring that you don't break things when you make changes to your application.</li>
</ul>
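<p>As a sketch of the first and last of these (using a hypothetical <code>isValidEmail</code> function, not one from a real codebase), a unit test verifies expected outputs for representative inputs, while a regression test pins down a specific input that once triggered a defect:</p>

```javascript
// Hypothetical validator under test (deliberately simplified; real
// email validation is far more involved).
function isValidEmail(address) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(address);
}

// Unit tests: representative valid and invalid inputs.
const unitTestsPass =
  isValidEmail("amy@uw.edu") === true &&
  isValidEmail("not-an-email") === false;

// Regression test: suppose addresses containing "+" once failed and we
// fixed it; this test ensures that specific failure never silently returns.
const regressionTestPasses = isValidEmail("amy+news@uw.edu") === true;
```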
<p>Which tests you should write depends on what risks you want to take. Don't care about failures? Don't write any tests. If failures of a particular kind are highly consequential to your team, you should probably write tests that check for those failures. As we noted above, you can't write enough tests to catch all bugs, so deciding which tests to write and maintain is a key challenge.</p>
<h2>Analysis</h2>
<p>Now, you might be thinking that it's obvious that the program above is defective for some integers and strings. How did you know? You <em>analyzed</em> the program rather than executing it with specific inputs. For example, when I read (analyzed) the program, I thought:</p>
<p><em>"if we assume <code>input</code> is an integer, then there are only three types of values to meaningfully consider with respect to the <code>&gt;</code> in the loop condition: positive, zero, and negative. Positive numbers will always decrement to 0 and return 0. Zero will return zero. And negative numbers just get returned as is, since they're less than zero, which is wrong with respect to the specification. And in JavaScript, strings are never greater than 0 (let's not worry about whether it even makes sense to be able to compare strings and numbers), so the string is returned, which is wrong."</em></p>
<p>The above is basically an informal proof. I used logic to divide the possible states of <code>input</code> and their effect on the program's behavior. I used <strong>symbolic execution</strong> to verify all possible paths through the function, finding the paths that result in correct and incorrect values. The strategy was an inspection because we did it manually. If we had written a <em>program</em> that read the program to perform this proof automatically, we would have called it <em>program analysis</em>.</p>
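<p>We can confirm the predictions of this informal proof by executing the three cases it identified. This quick check is not a substitute for the analysis; it just demonstrates that the analysis was right:</p>

```javascript
function count(input) {
  while (input > 0)
    input--;
  return input;
}

// The analysis predicted these outcomes without enumerating every input:
const positiveCase = count(3);    // positives decrement to 0, as specified
const negativeCase = count(-5);   // negatives skip the loop: returns -5, a defect
const stringCase = count("oops"); // a string is never > 0: returned unchanged, a defect
```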
<p>The benefit of analysis is that it <em>can</em> demonstrate that a program is correct in all cases. This is because analyses can handle infinite spaces of possible inputs by mapping those infinite inputs onto a finite space of possible executions. It's not always possible to do this in practice, since many kinds of programs <em>can</em> execute in infinite ways, but it gets us closer to proving correctness.</p>
<p>One popular type of automatic program analysis tool is the <strong>static analysis</strong> tool. These tools read programs and identify potential defects using the types of formal proofs like the ones above. They typically result in a set of warnings, each one requiring inspection by a developer to verify, since some of the warnings may be false positives (something the tool thought was a defect, but wasn't). Although static analysis tools can find many kinds of defects, they aren't yet viewed by developers to be that useful because the false positives are often large in number and the way they are presented makes them difficult to understand (<a href="#johnson">Johnson et al. 2013</a>). There is one exception to this, and it's a static analysis tool you've likely used: a compiler. Compilers verify the correctness of syntax, grammar, and for statically-typed languages, the correctness of types. As I'm sure you've discovered, compiler errors aren't always the easiest to comprehend, but they do find real defects automatically. The research community is still searching for more advanced ways to check more advanced specifications of program behavior.</p>
<p>
Not all analytical techniques rely entirely on logic.
In fact, one of the most popular methods of verification in industry are <strong>code reviews</strong>, also known as <em>inspections</em>.
The basic idea of an inspection is to read the program analytically, following the control and data flow inside the code to look for defects.
This can be done alone, in groups, and even included as part of process of integrating changes, to verify them before they are committed to a branch.
Modern code reviews, while informal, help find defects, stimulate knowledge transfer between developers, increase team awareness, and help identify alternative implementations that can improve quality (<a href="#bacchelli">Bacchelli & Bird 2013</a>).
One study found that measures of how much a developer knows about an architecture can increase 66% to 150% depending on the project (<a href="#rigby2">Rigby & Bird 2013</a>).
That said, not all reviews are created equal: the best ones are thorough and conducted by a reviewer with strong familiarity with the code (<a href="#kononenko">Kononenko et al. 2016</a>); including reviewers that do not know each other or do not know the code can result in longer reviews, especially when run as meetings (<a href="#seaman">Seaman & Basili 1997</a>).
Soliciting reviews asynchronously by allowing developers to request reviewers of their peers is generally much more scalable (<a href="#rigby">Rigby & Storey 2011</a>), but this requires developers to be careful about which reviews they invest in.
These choices about where to put reviewing attention can result in great disparities in what is reviewed, especially in open source: the more work a review is perceived to be, the less likely it is to be reviewed at all and the longer the delays in receiving a review (<a href="#thongtanunam">Thongtanunam et al. 2018</a>).
</p>
<p>
Beyond these more technical considerations around verifying a program's correctness are organizational issues around different software qualities.
For example, different organizations have different sensitivities to defects.
If a $0.99 game on the app store has a defect, that might not hurt its sales much, unless that defect prevents a player from completing the game.
If Boeing's flight automation software has a defect, hundreds of people might die.
The game developer might do a little manual play testing, release, and see if anyone reports a defect.
Boeing will spend years proving mathematically with automatic program analysis that every line of code does what is intended, and repeating this verification every time a line of code changes.
Moreover, requirements may change differently in different domains.
For example, a game company might finally recognize the sexist stereotypes amplified in its game mechanics and have to change requirements, resulting in changed definitions of correctness, and the incorporation of new software qualities such as bias into testing plans.
Similarly, Boeing might have to respond to pandemic fears by shifting resources away from verifying flight crash safety toward verifying public health safety.
What type of verification is right for your team depends entirely on what a team is building, who's using it, and how they're depending on it.
</p>
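<p>One concrete difference between these verification cultures is how much intended behavior a team encodes as automated checks that are re-run on every code change. The sketch below illustrates the idea with a hypothetical function and threshold (they are invented for illustration, not drawn from any real avionics system): a few assertions that a continuous integration server could execute after each commit, so that any change violating the intended behavior fails verification before it ships.</p>

```python
# A minimal sketch of automated regression checks, re-run on every change.
# The function and its threshold are hypothetical examples for illustration,
# not a real avionics specification.

def altitude_alert(altitude_ft: float, minimum_safe_ft: float = 1000.0) -> bool:
    """Return True if the aircraft is below the minimum safe altitude."""
    return altitude_ft < minimum_safe_ft

# Each assertion encodes one intended behavior; if a future code change
# violates any of them, verification fails before the change is released.
assert altitude_alert(500.0) is True      # well below the threshold: alert
assert altitude_alert(2000.0) is False    # safely above: no alert
assert altitude_alert(1000.0) is False    # exactly at the minimum counts as safe
```

<p>The game studio in the example above might stop at a handful of such checks; Boeing would go much further, using program analysis to prove properties like these for every possible input rather than a few sampled ones.</p>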
<center class="lead"><a href="monitoring.html">Next chapter: Monitoring</a></center>
<h2>Further reading</h2>
<small>
<p id="ahmed">Iftekhar Ahmed, Rahul Gopinath, Caius Brindescu, Alex Groce, and Carlos Jensen. 2016. <a href="https://doi.org/10.1145/2950290.2950324" target="_blank">Can testedness be effectively measured?</a> In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 547-558.</p>
<p id="bacchelli">Alberto Bacchelli and Christian Bird. 2013. <a href="http://dl.acm.org/citation.cfm?id=2486882" target="_blank">Expectations, outcomes, and challenges of modern code review</a>. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 712-721.</p>
<p id="beller">Moritz Beller, Georgios Gousios, Annibale Panichella, and Andy Zaidman. 2015. <a href="https://doi.org/10.1145/2786805.2786843" target="_blank">When, how, and why developers (do not) test in their IDEs</a>. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 179-190.</p>
<p id="johnson">Brittany Johnson, Yoonki Song, Emerson Murphy-Hill, and Robert Bowdidge. 2013. <a href="http://ieeexplore.ieee.org/abstract/document/6606613" target="_blank">Why don't software developers use static analysis tools to find bugs?</a> In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 672-681.</p>
<p id="kononenko">Oleksii Kononenko, Olga Baysal, and Michael W. Godfrey. 2016. <a href="https://doi.org/10.1145/2884781.2884840" target="_blank">Code review quality: how developers see it</a>. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 1028-1038.</p>
<p id="pham">Raphael Pham, Stephan Kiesling, Olga Liskin, Leif Singer, and Kurt Schneider. 2014. <a href="http://dx.doi.org/10.1145/2635868.2635925" target="_blank">Enablers, inhibitors, and perceptions of testing in novice software teams</a>. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 30-40.</p>
<p id="rigby">Peter C. Rigby and Margaret-Anne Storey. 2011. <a href="https://doi.org/10.1145/1985793.1985867" target="_blank">Understanding broadcast based peer review on open source software projects</a>. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 541-550.</p>
<p id="rigby2">Peter C. Rigby and Christian Bird. 2013. <a href="http://dx.doi.org/10.1145/2491411.2491444" target="_blank">Convergent contemporary software peer review practices</a>. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM, New York, NY, USA, 202-212.</p>
<p id="seaman">Carolyn B. Seaman and Victor R. Basili. 1997. <a href="http://dx.doi.org/10.1145/253228.253248" target="_blank">An empirical study of communication in code inspections</a>. In Proceedings of the 19th international conference on Software engineering (ICSE '97). ACM, New York, NY, USA, 96-106.</p>
<p id="thongtanunam">Thongtanunam, P., McIntosh, S., Hassan, A. E., & Iida, H. (2016). <a href="https://doi.org/10.1007/s10664-016-9452-6">Review participation in modern code review: An empirical study of the Android, Qt, and OpenStack projects</a>. Empirical Software Engineering.</p>
</small>
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-10917999-1']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
</body>
</html>