From 858970c79aad6e3de0c8d5d98ee5d288327c6412 Mon Sep 17 00:00:00 2001 From: Andy Ko Date: Tue, 16 Apr 2019 14:26:38 -0700 Subject: [PATCH 1/4] Elaborated on components and connectors. --- architecture.html | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/architecture.html b/architecture.html index d4943cb..9b64e61 100644 --- a/architecture.html +++ b/architecture.html @@ -45,7 +45,9 @@

Architectural styles come in all shapes and sizes. Some are smaller design patterns of information sharing (Beck et al. 2006), whereas others are ubiquitous but specialized patterns such as the architectures required to support undo and cancel in user interfaces (Bass et al. 2004).

-

One fundamental unit of which an architecture is composed is a component. This is basically a word that refers to any abstraction—any code, really—that attempts to encapsulate some well defined functionality or behavior separate from other functionality and behavior. Components have interfaces that decide how it can communicate with other components. It might be a class, a data structure, a set of functions, a library, or even something like a web service. All of these are abstractions that encapsulate interrelated computation and state. The second fundamental unit of architecture is connectors. Connectors are abstractions (code) that transmit information between components. They're brokers that connect components, but do not necessarily have meaningful behaviors or states of their own. Connectors can be things like function calls, web service API calls, events, requests, and so on.

+

One fundamental unit of which an architecture is composed is a component. This is basically a word that refers to any abstraction—any code, really—that attempts to encapsulate some well-defined functionality or behavior separate from other functionality and behavior. For example, consider the Java class Math: it encapsulates a wide range of related mathematical functions. This class has an interface that decides how it can communicate with other components (sending arguments to a math function and getting a return value). Components can be more than classes, though: they might be a data structure, a set of functions, a library, an API, or even something like a web service. All of these are abstractions that encapsulate interrelated computation and state for some well-defined purpose.

+ +

The second fundamental unit of architecture is the connector. Connectors are code that transmits information between components. They're brokers that connect components, but do not necessarily have meaningful behaviors or states of their own. Connectors can be things like function calls, web service API calls, events, requests, and so on. None of these mechanisms store state or functionality themselves; instead, they are the things that tie components' functionality and state together.
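To make the distinction concrete, here is a minimal sketch in JavaScript (the counter component, its interface, and the rendering function are all hypothetical, invented only for illustration): one component encapsulates state behind a small interface, another renders values, and the connectors are the plain function calls that carry information between them.

// A hypothetical "counter" component: it encapsulates its state (count)
// and exposes a small interface (increment, current) to other components.
function makeCounter() {
  let count = 0; // encapsulated state, invisible to other components
  return {
    increment: function () { count = count + 1; },
    current: function () { return count; }
  };
}

// Another hypothetical component that turns a number into display text.
function renderCount(n) {
  return "Click count: " + n;
}

// The connectors here are the function calls themselves: they carry
// information between the two components but hold no state of their own.
const counter = makeCounter();
counter.increment();
console.log(renderCount(counter.current())); // "Click count: 1"

Because the only thing the two components share is the value passed through the connector, either one could be replaced without touching the other.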

Even with carefully selected architectures, systems can still be difficult to put together, leading to architectural mismatch (Garlan et al. 1995). When mismatch occurs, connecting two styles can require dramatic amounts of code, imposing significant risk of defects and significant maintenance costs. One common example of mismatch occurs with the ubiquitous use of database schemas in client/server web applications. A single change in a database schema can often result in dramatic changes in an application, as every line of code that uses that part of the schema, either directly or indirectly, must be updated (Qiu et al. 2013). This kind of mismatch occurs because the component that manages data (the database) and the component that renders data (the user interface) are both highly "coupled" with the database schema: the user interface needs to know a lot about the data, its meaning, and its structure in order to render it meaningfully.
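To see this coupling in code, consider a small hypothetical sketch in JavaScript (the table, column names, and rendering function are invented for illustration): the user interface component names the schema's columns directly, so even a simple column rename ripples into it.

// A hypothetical row returned by a query against a "users" table:
// { first_name: "Ada", last_name: "Lovelace", last_login: "2019-04-16" }

// A rendering component coupled to the schema: it names the columns
// directly, so it must change whenever those columns change.
function renderUserSummary(row) {
  return row.first_name + " " + row.last_name +
         " (last seen " + row.last_login + ")";
}

// If the schema later renames last_login to last_login_at, this function
// (and every other line of code that names the old column) must be found
// and updated, or it will quietly render "undefined".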

From 560668741bd3516347cc84dfa578ac0a19f5080d Mon Sep 17 00:00:00 2001 From: Andy Ko Date: Tue, 16 Apr 2019 14:37:47 -0700 Subject: [PATCH 2/4] Clarified automatic theorem provers. --- specifications.html | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/specifications.html b/specifications.html index 61a4671..4894e71 100644 --- a/specifications.html +++ b/specifications.html @@ -70,11 +70,11 @@ function min(a, b) { } -

These two new lines of code are essentially functional specifications that declare "If either of those inputs is not an integer, the caller of this function is doing something wrong". This is useful to declare, but assertions have a bunch of problems: if your program can send a non-integer value to min, but you never test it in a way that does, you'll never see those alerts. This form of dynamic verification is therefore very limited, since it provides weaker guarantees about correctness. That said, a study of the use of assertions in a large database of GitHub projects shows that use of assertions is related to fewer defects (Casalnuovo et al. 2015) (though note that I said "related": we have no evidence that assertions actually prevent defects. It may be possible that developers who use assertions are just better at avoiding defects.)

+

These two new lines of code are essentially functional specifications that declare "If either of those inputs is not an integer, the caller of this function is doing something wrong". This is useful to declare, but assertions have a bunch of problems: for example, if your program can send a non-integer value to min but you never test it in a way that does, you'll never see those alerts. This form of dynamic verification is therefore very limited, since it provides weaker guarantees about correctness. That said, a study of the use of assertions in a large database of GitHub projects shows that use of assertions is related to fewer defects (Casalnuovo et al. 2015) (though note that I said "related": we have no evidence that assertions actually prevent defects. It may simply be that developers who use assertions are better at avoiding defects.)
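Because the hunk above only shows part of the function, here is a sketch of what such assertions might look like in context (the message text and the function body are reconstructions for illustration, not the chapter's exact code; it uses the Number.isInteger checks and alerts described below):

// A sketch of assertion-style checks on min's inputs (wording is hypothetical).
function min(a, b) {
  if (!Number.isInteger(a)) alert("The first input to min must be an integer.");
  if (!Number.isInteger(b)) alert("The second input to min must be an integer.");
  return a < b ? a : b;
}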

Assertions are related to the broader category of error handling language features. Error handling includes assertions, but also programming language features like exceptions and exception handlers. Error handling is a form of specification in that checking for errors usually entails explicitly specifying the conditions that determine an error. For example, in the code above, the condition Number.isInteger(a) specifies that the parameter a must be an integer. Other exception handling code such as the Java throws statement indicates the cases in which errors can occur and the corresponding catch statement indicates what is to be done about errors. It is difficult to implement good exception handling that provides granular, clear ways of recovering from errors (Chen et al. 2009). Evidence shows that modern developers are still exceptionally bad at designing for errors; one study found that errors are not designed for, few errors are tested for, and exception handling is often overly general, providing little ability for users to understand errors or for developers to debug them (Ebert et al. 2015). These difficulties appear to be because it is difficult to imagine the vast range of errors that can occur (Maxion & Olszewski 2000).
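As a brief illustration of exceptions as specifications, here is a minimal sketch in JavaScript (not code from the chapter; the error message and the caller's fallback are invented): the throw states the error condition explicitly, and the catch block decides what is to be done about it.

// The throw makes the error condition part of min's specification:
// callers are told explicitly what counts as misuse.
function min(a, b) {
  if (!Number.isInteger(a) || !Number.isInteger(b)) {
    throw new TypeError("min expects two integer arguments");
  }
  return a < b ? a : b;
}

// The caller decides how to recover; here it falls back to a default value.
let smallest;
try {
  smallest = min(5, "2"); // the string "2" violates the specification
} catch (e) {
  console.error("Could not compute a minimum: " + e.message);
  smallest = 0; // an arbitrary fallback, purely for illustration
}

As the studies above suggest, the hard part is not the language mechanics but anticipating all of the conditions worth checking and designing recoveries this specific for each of them.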

-

Researchers have invented many forms of specification that require more work and more thought to write, but can be used to make stronger guarantees about correctness (Woodcock et al. 2009). For example, many languages support the expression of formal pre-conditions and post-conditions that represent contracts that must be kept. (Formal means mathematical, facilitating mathematical proofs that these conditions are met). Because these contracts are essentially mathematical promises, we can build tools that automatically read a function's code and verify that what it computes exhibits those mathematical properties using automated theorem proving systems. For example, suppose we wrote some formal specifications for our example above to replace our assertions (using a fictional notation for illustration purposes):

+

Researchers have invented many forms of specification that require more work and more thought to write, but can be used to make stronger guarantees about correctness (Woodcock et al. 2009). For example, many languages support the expression of formal pre-conditions and post-conditions that represent contracts that must be kept for the program to be correct. (Formal means mathematical, facilitating mathematical proofs that these conditions are met). Because these contracts are essentially mathematical promises, we can build tools that automatically read a function's code and verify that what it computes exhibits those mathematical properties using automated theorem proving systems. For example, suppose we wrote some formal specifications for our example above to replace our assertions (using a fictional notation for illustration purposes):

 // pre-conditions: a in Integers, b in Integers
@@ -84,7 +84,7 @@ function min(a, b) {
 }		
 		
-

The annotations above require that, no matter what, the inputs have to be integers and the output has to be less than or equal to both values. The automatic theorem prover can then start with the claim that result is always less than or equal to both and begin searching for a counterexample. Can you find a counterexample?

+

The annotations above require that, no matter what, the inputs have to be integers and the output has to be less than or equal to both values. The automatic theorem prover can then start with the claim that result is always less than or equal to both and begin searching for a counterexample. Can you find a counterexample? Really try. Think about what you're doing while you try: you're probably experimenting with different inputs to identify arguments that violate the contract. That's similar to what automatic theorem provers do, but they use many tricks to explore large possible spaces of inputs all at once, and they do it very quickly.
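Here is a rough sketch of that kind of search in JavaScript (this is not how theorem provers work internally, and the input range is an arbitrary choice for illustration): it simply tries many integer pairs and checks whether the post-condition ever fails.

// Exhaustively test the post-condition (result <= a and result <= b)
// over a small range of integer inputs, looking for a counterexample.
function findCounterexample(minFn) {
  for (let a = -100; a <= 100; a++) {
    for (let b = -100; b <= 100; b++) {
      const result = minFn(a, b);
      if (!(result <= a && result <= b)) {
        return { a: a, b: b, result: result }; // the contract was violated
      }
    }
  }
  return null; // no counterexample found in the searched range
}

console.log(findCounterexample(function (a, b) { return a < b ? a : b; }));

For a correct min this prints null, but that only shows the contract holds for the inputs tried; an automated theorem prover's job is to establish it for all integers at once, which is why it reasons symbolically rather than by enumeration.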

There are definite tradeoffs with writing detailed, formal specifications. The benefits are clear: many companies have written formal functional specifications in order to make completely unambiguous the required behavior of their code, particularly systems that are capable of killing people or losing money, such as flight automation software, banking systems, and even compilers that create executables from code (Woodcock et al. 2009). In these settings, it's worth the effort of being 100% certain that the program is correct because if it's not, people can die.

From d4c4c505729fbbdeed28599959298c861ed5252c Mon Sep 17 00:00:00 2001 From: Andy Ko Date: Tue, 23 Apr 2019 13:41:30 -0700 Subject: [PATCH 3/4] Fixed a few typos. --- process.html | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/process.html b/process.html index 32a0513..a68efba 100644 --- a/process.html +++ b/process.html @@ -24,13 +24,12 @@ Credit: public domain -

Process

Andrew J. Ko

So you know what you're going to build and how you're going to build it. What process should you follow to build it? Who's going to build what? What order should you build it in? How do you make sure everyone is in sync while you're building it? And most importantly, how do you make sure you build it well and on time? These are fundamental questions in software engineering with many potential answers. Unfortunately, we still don't know which of those answers are right.

-

At the foundation of all of these questions are basic matters of project management: plan, execute, and monitor. But developers in the 1970's and on found that traditional project management ideas didn't seem to work. The earliest process ideas followed a "waterfall" model, in which a project begins by identifying requirements, writing specifications, implementing, testing, and releasing, all under the assumption that every stage could be fully tested and verified. (Recognize this? It's the order of topics we're discussing in this class!). Many managers seemed to like the waterfall model because it seemed structured and predictable; however, because most managers were originally software developers, they preferred a structured approach to project management (Weinberg 1982). The reality, however, was that no matter how much verification one did of each of these steps, there always seemed to be more information in later steps that caused a team to reconsider it's earlier decision (e.g., imagine a customer liked a requirement when it was described in the abstract, but when it was actually built, they rejected it, because they finally saw what the requirement really meant).

+

At the foundation of all of these questions are basic matters of project management: plan, execute, and monitor. But developers from the 1970's on found that traditional project management ideas didn't seem to work. The earliest process ideas followed a "waterfall" model, in which a project begins by identifying requirements, writing specifications, implementing, testing, and releasing, all under the assumption that every stage could be fully tested and verified. (Recognize this? It's the order of topics we're discussing in this class!) Many managers seemed to like the waterfall model because it seemed structured and predictable; perhaps because most managers were originally software developers, they preferred a structured approach to project management (Weinberg 1982). The reality, however, was that no matter how much verification one did of each of these steps, there always seemed to be more information in later steps that caused a team to reconsider its earlier decisions (e.g., imagine a customer liked a requirement when it was described in the abstract, but when it was actually built, they rejected it, because they finally saw what the requirement really meant).

In 1988, Barry Boehm proposed an alternative to waterfall called the Spiral model (Boehm 1988): rather than trying to verify every step before proceeding to the next level of detail, prototype every step along the way, getting partial validation, iteratively converging through a series of prototypes toward both an acceptable set of requirements and an acceptable product. Throughout, risk assessment is key, encouraging a team to reflect and revise process based on what they are learning. What was important about these ideas was not the particulars of Boehm's proposed process, but the disruptive idea that iteration and process improvement are critical to engineering great software.

From 30d5265868dc0d26491de00b0c1dcd0cbc87ac4c Mon Sep 17 00:00:00 2001 From: Andy Ko Date: Tue, 23 Apr 2019 14:10:00 -0700 Subject: [PATCH 4/4] Fixed #39, citing study on becoming agile. --- process.html | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/process.html b/process.html index a68efba..e0265ad 100644 --- a/process.html +++ b/process.html @@ -64,7 +64,7 @@

Note that none of these had any empirical evidence to back them. Moreover, Beck described in his original proposal that these ideas were best for "outsourced or in-house development of small- to medium-sized systems where requirements are vague and likely to change", but as industry often does, it began hyping XP as a universal solution to software project management woes, adopting all kinds of combinations of these ideas and adapting them to existing processes. In reality, the value of XP appears to depend on highly project-specific factors (Müller & Padberg 2013), while the core ideas that industry has adopted are valuing feedback, communication, simplicity, and respect for individuals and the team (Sharp & Robinson 2004). Researchers continue to investigate the merits of the list above; for example, numerous studies have investigated the effects of pair programming on defects, finding small but measurable benefits (di Bella et al. 2012).

-

At the same time, Beck began also espousing the idea of "Agile" methods, which celebrated many of the values underlying Extreme Programming, such as focusing on individuals, keeping things simple, collaborating with customers, and being iterative. This idea of begin agile was even more popular and spread widely in industry and research, even though many of the same ideas appeared much earlier in Boehm's work on the Spiral model. Researchers found that Agile methods can increase developer enthusiasm (Syed-Abdulla et al. 2006), that agile teams need different roles such as Mentor, Co-ordinator, Translator, Champion, Promoter, and Terminator (Hoda et al. 2010), and that teams are combing agile methods with all kinds of process ideas from other project management frameworks such as Scrum (meet daily to plan work, plan two-week sprints, maintain a backlog of work) and Kanban (visualize the workflow, limit work-in-progress, manage flow, make policies explicit, and implement feedback loops) (Al-Baik & Miller 2015). I don't define any of these ideas here because there aren't standard definitions to share.

+

At the same time, Beck also began espousing the idea of "Agile" methods, which celebrated many of the values underlying Extreme Programming, such as focusing on individuals, keeping things simple, collaborating with customers, and being iterative. This idea of being agile was even more popular and spread widely in industry and research, even though many of the same ideas appeared much earlier in Boehm's work on the Spiral model. Researchers found that Agile methods can increase developer enthusiasm (Syed-Abdulla et al. 2006), that agile teams need different roles such as Mentor, Co-ordinator, Translator, Champion, Promoter, and Terminator (Hoda et al. 2010), and that teams are combining agile methods with all kinds of process ideas from other project management frameworks such as Scrum (meet daily to plan work, plan two-week sprints, maintain a backlog of work) and Kanban (visualize the workflow, limit work-in-progress, manage flow, make policies explicit, and implement feedback loops) (Al-Baik & Miller 2015). Research has also found that transitioning a team to Agile methods is slow and complex because it requires everyone on a team to change their behavior, beliefs, and practices (Hoda & Noble 2017).

Ultimately, all of this energy around process ideas in industry is exciting, but disorganized. None of these efforts really get to the core of what makes software projects difficult to manage. One effort in research gets at this core by contributing new theories that explain these difficulties. The first is Herbsleb's Socio-Technical Theory of Coordination (STTC). The idea of the theory is quite simple: technical dependencies in engineering decisions (e.g., this function calls this other function, this data type stores this other data type) define the social constraints that the organization must solve in a variety of ways to build and maintain software (Herbsleb & Mockus 2003, Herbsleb 2016). The better the organization builds processes and awareness tools to ensure that the people who own those engineering dependencies are communicating and aware of each others' work, the fewer defects will occur. Herbsleb referred to this alignment as sociotechnical congruence, and conducted a number of studies demonstrating its predictive and explanatory power.

@@ -88,6 +88,7 @@

James D. Herbsleb and Audris Mockus. 2003. Formulation and preliminary test of an empirical theory of coordination in software engineering. In Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (ESEC/FSE-11). ACM, New York, NY, USA, 138-137.

James Herbsleb. 2016. Building a socio-technical theory of coordination: why and how. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 2-10.

Rashina Hoda, James Noble, and Stuart Marshall. 2010. Organizing self-organizing teams. In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1 (ICSE '10), Vol. 1. ACM, New York, NY, USA, 285-294.

+

Rashina Hoda and James Noble. 2017. Becoming agile: a grounded theory of agile transitions in practice. In Proceedings of the 39th International Conference on Software Engineering (ICSE '17). IEEE Press, 141-151.

Andrew J. Ko, Robert DeLine, and Gina Venolia. 2007. Information Needs in Collocated Software Development Teams. In Proceedings of the 29th international conference on Software Engineering (ICSE '07). IEEE Computer Society, Washington, DC, USA, 344-353.

Andrew J. Ko (2017). A Three-Year Participant Observation of Software Startup Software Evolution. International Conference on Software Engineering (ICSE), Software Engineering in Practice, to appear.

Ekrem Kocaguneli, Thomas Zimmermann, Christian Bird, Nachiappan Nagappan, and Tim Menzies. 2013. Distributed development considered harmful? In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 882-890.