Fixed #61, elaborating on evolution.
This commit is contained in:
parent b7e8696bfb
commit 166beb6450
3 changed files with 12 additions and 3 deletions
@@ -89,6 +89,7 @@
"dybå02": "Dybå, T. (2002). [Enabling software process improvement: an investigation of the importance of organizational issues|https://doi.org/10.1145/940071.940092]. Empirical Software Engineering, 7(4), 387-390.",
"dybå03": "Tore Dybå (2003). [Factors of software process improvement success in small and large organizations: an empirical study in the scandinavian context|http://dx.doi.org/10.1145/940071.940092]. In Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (ESEC/FSE-11). ACM, New York, NY, USA, 148-157.",
"ebert15": "Ebert, F., Castor, F., and Serebrenik, A. (2015). [An exploratory study on exception handling bugs in Java programs|https://doi.org/10.1016/j.jss.2015.04.066]. Journal of Systems and Software, 106, 82-101.",
"eisenstadt97": "Eisenstadt, M. (1997). [My hairiest bug war stories|https://doi.org/10.1145/248448.248456]. Communications of the ACM, 40(4), 30-37.",
"endrikat14": "Stefan Endrikat, Stefan Hanenberg, Romain Robbes, and Andreas Stefik. 2014. [How do API documentation and static typing affect API usability?|https://doi.org/10.1145/2568225.2568299] In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 632-642.",
"ernst15": "Neil A. Ernst, Stephany Bellomo, Ipek Ozkaya, Robert L. Nord, and Ian Gorton. 2015. [Measure it? Manage it? Ignore it? Software practitioners and technical debt|https://doi.org/10.1145/2786805.2786848]. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 50-60.",
"fairbanks10": "Fairbanks, G. (2010). [Just enough software architecture: a risk-driven approach|https://www.amazon.com/Just-Enough-Software-Architecture-Risk-Driven/dp/0984618104]. Marshall & Brainerd.",
@@ -189,7 +190,9 @@
"rigby13": "Peter C. Rigby and Christian Bird. 2013. [Convergent contemporary software peer review practices|http://dx.doi.org/10.1145/2491411.2491444]. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM, New York, NY, USA, 202-212.",
"rigby16": "Peter C. Rigby, Yue Cai Zhu, Samuel M. Donadelli, and Audris Mockus. 2016. [Quantifying and mitigating turnover-induced knowledge loss: case studies of chrome and a project at Avaya|https://doi.org/10.1145/2884781.2884851]. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 1006-1016.",
"roehm12": "Tobias Roehm, Rebecca Tiarks, Rainer Koschke, and Walid Maalej. 2012. http://dl.acm.org/citation.cfm?id=2337254 How do professional developers comprehend software? In Proceedings of the 34th International Conference on Software Engineering (ICSE '12). IEEE Press, Piscataway, NJ, USA, 255-265.",
"rothermel96": "Rothermel, G., & Harrold, M. J. (1996). [Analyzing regression test selection techniques|https://doi.org/10.1109/32.536955]. IEEE Transactions on software engineering, 22(8), 529-551.",
"rubin16": "Julia Rubin and Martin Rinard. 2016. [The challenges of staying together while moving fast: an exploratory study|https://doi.org/10.1145/2884781.2884871]. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 982-993.",
"runeson06": "Runeson, P. (2006). [A survey of unit testing practices|https://doi.org/10.1109/MS.2006.91]. IEEE Software, 23(4), 22-29.",
"salvaneschi14": "Guido Salvaneschi, Sven Amann, Sebastian Proksch, and Mira Mezini. 2014. https://doi.org/10.1145/2635868.2635895 An empirical study on program comprehension with reactive programming. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 564-575.",
"santos16": "Ronnie E. S. Santos, Fabio Q. B. da Silva, Cleyton V. C. de Magalhães, and Cleviton V. F. Monteiro. 2016. [Building a theory of job rotation in software engineering from an instrumental case study|https://doi.org/10.1145/2884781.2884837]. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 971-981.",
"schiller14": "Schiller, T. W., Donohue, K., Coward, F., & Ernst, M. D. (2014). [Case studies and tools for contract specifications|https://doi.org/10.1145/2568225.2568285]. In Proceedings of the 36th International Conference on Software Engineering (pp. 596-607).",
@@ -68,4 +68,6 @@ Ultimately, all of these strategies are essentially search algorithms, seeking t
Once you've found the defect, what do you do? It turns out that there are usually many ways to repair a defect. How developers fix defects depends a lot on the circumstances: if they're near a release, they may not even fix it if it's too risky; if there's no pressure, and the fix requires major changes, they may refactor or even redesign the program to prevent the failure<murphyhill13>. This can be a delicate, risky process: in one study of open source operating system bug fixes, 27% of the incorrect fixes were made by developers who had never read the source code files they changed, suggesting that the key to correct fixes is a deep comprehension of exactly how the defective code is intended to behave<yin11>.
These risks suggest the importance of *impact analysis*<arnold96>, the activity of systematically and precisely analyzing the consequences of some proposed fix. This can involve analyzing dependencies that are affected by a bug fix, re-running manual and automated tests, and perhaps even running user tests to ensure that the way in which you fixed a bug does not inadvertently introduce problems with usability or workflow. Debugging is therefore like surgery: slow, methodical, purposeful, and risk-averse.
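For instance, part of that dependency analysis can be automated. Below is a minimal sketch, assuming a Python codebase and a hypothetical changed module name, that lists the other source files importing the changed module, so that their tests can be re-run before a fix is accepted:

```python
# A rough sketch of reverse-dependency lookup for impact analysis,
# assuming a Python codebase: given the name of a changed module, list
# the other source files that import it, so that their tests can be
# re-run before the fix is accepted.
import ast
from pathlib import Path

def modules_affected_by(changed_module: str, source_dir: str = "src") -> list[Path]:
    affected = []
    for path in Path(source_dir).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                names = [node.module or ""]
            else:
                continue
            if any(n == changed_module or n.startswith(changed_module + ".") for n in names):
                affected.append(path)
                break
    return affected

if __name__ == "__main__":
    # Hypothetical example: after fixing a bug in payments.py, see which
    # other modules might be impacted by the change.
    for path in modules_affected_by("payments"):
        print(path)
```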
---
Because debugging can be so challenging, and because it is so pervasive and inescapable in programming, it is often a major source of frustration and unpredictability in software engineering. However, finding a defect after a long search can also be a great triumph<eisenstadt97>, bringing together the most powerful aspects of developer tools, the collective knowledge of a team, and the careful, systematic work of a programmer, trying to make sense of code. As with all things in software engineering, persistence and patience are rewarded.
@@ -1,4 +1,4 @@
Programs change. You find bugs, you fix them. You discover a new requirement, you add a feature. A requirement changes because users demand it, you revise a feature. The simple fact about programs is that they're rarely stable; rather, they are constantly changing, living artifacts that shift as much as our social worlds shift.
Nowhere is this constant evolution more apparent than in our daily encounters with software updates. The apps on our phones are constantly being updated to improve our experiences, while the web sites we visit potentially change every time we visit them, without us noticing. These different models have different notions of who controls changes to user experience: should software companies control when your experience changes, or should you? And for systems with significant backend dependencies, is it even possible to give users control over when things change?
@@ -16,4 +16,8 @@ Perhaps the most modern form of build practice is *continuous integration* (CI).
For example, some large projects like Windows can take a whole day to build, making continuous integration of the whole operating system infeasible. When builds and tests are fast, continuous integration can accelerate development, especially in projects with large numbers of contributors<vasilescu15>. Some teams even go further than continuous integration, building continuous _delivery_ systems that ensure that complete builds are readily available for release (potentially multiple times per day for software on the web). Having a repeatable, automated deployment process is key for such processes<chen15>.
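At its core, a continuous integration service does something simple on every change: it runs the project's build and tests, and refuses to integrate the change if either fails. A minimal sketch of such a gate, assuming hypothetical `make build` and `make test` commands, might look like this:

```python
# A rough sketch of the kind of gate a continuous integration job runs
# on every push: build the project, run its tests, and reject the
# change if either step fails. The make targets are hypothetical.
import subprocess
import sys

STEPS = [
    ["make", "build"],  # hypothetical build command
    ["make", "test"],   # hypothetical test command
]

def main() -> int:
    for step in STEPS:
        print("Running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            print("Step failed; this change should not be integrated.")
            return result.returncode
    print("All steps passed; the change is safe to integrate.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```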
One last problem with changes in software is managing the *releases* of software. Good release management should archive new versions of software, automatically post the version online, make the version accessible to users, keep a history of who accesses the new version, and provide clear release notes describing changes from the previous version<vanderhoek97>. By default, all of this is quite manual, but many of these steps can be automated, streamlining how teams release changes to the world. You've probably encountered these most in the form of software updates to applications and operating systems.
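As one small, hypothetical illustration of that automation, the sketch below drafts release notes from the commit messages recorded since the previous tagged version, assuming a git repository that tags each release:

```python
# A rough sketch of automating one release management step: drafting
# release notes from the commit messages recorded since the previous
# tagged version. The version tags are hypothetical.
import subprocess

def draft_release_notes(previous_tag: str, new_version: str) -> str:
    # One-line commit summaries since the previous release tag.
    log = subprocess.run(
        ["git", "log", "--oneline", f"{previous_tag}..HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = [f"Release {new_version}", "", f"Changes since {previous_tag}:", ""]
    lines += ["- " + entry for entry in log.splitlines()]
    return "\n".join(lines)

if __name__ == "__main__":
    print(draft_release_notes("v1.2.0", "v1.3.0"))  # hypothetical versions
```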
With so many ways that software can change, and so many tools for managing that change, it also becomes important to manage the risk of change. One approach to managing this risk is *impact analysis*<arnold96>, an activity of systematically and precisely analyzing the consequences of a change _before_ making the change. This can involve analyzing dependencies that are affected by a bug fix, running unit tests on smaller parts of an implementation<runeson06>, running regression tests on previously encountered failures<rothermel96>, and running user tests to ensure that the way in which you fixed a bug does not inadvertently introduce problems with usability, usefulness, or other qualities critical to meeting requirements.
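To make the regression testing part concrete, here is a minimal sketch of regression test selection, assuming a hypothetical coverage map from each test to the source files it exercises; only the tests that touch the changed files are re-run:

```python
# A rough sketch of regression test selection for impact analysis:
# given the files a change touches, re-run only the tests known to
# exercise those files, rather than the entire test suite.

# Hypothetical coverage map: test name -> source files it exercises.
COVERAGE = {
    "test_checkout": {"cart.py", "payments.py"},
    "test_login": {"auth.py"},
    "test_profile": {"auth.py", "profiles.py"},
}

def select_tests(changed_files: set[str]) -> list[str]:
    # A test is selected if it covers at least one changed file.
    return sorted(
        test for test, covered in COVERAGE.items() if covered & changed_files
    )

if __name__ == "__main__":
    # A fix to auth.py should trigger only the login and profile tests.
    print(select_tests({"auth.py"}))  # ['test_login', 'test_profile']
```

Real regression test selection tools build and maintain this kind of coverage information automatically, but the tradeoff is the same: the safety of re-running everything versus the speed of re-running only what a change could plausibly affect.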
Impact analysis, and software evolution in general, is therefore ultimately a process of managing change. Change in requirements, change in code, change in data, and change in how software is situated in the world. And like any change management, it must be done cautiously, both to avoid breaking critical functionality and to ensure that whatever new changes are brought to the world achieve their goals.