mirror of https://github.com/amyjko/cooperative-software-development (synced 2024-12-25 21:58:15 +01:00)
Commit cfa9d20781 (parent 256e8c8455): added links, fixed inline links
1 changed file with 10 additions and 10 deletions
@@ -104,7 +104,7 @@ function count(input) {
<p>One popular type of automatic program analysis tool is a <strong>static analysis</strong> tool. These tools read programs and identify potential defects using the same kinds of formal proofs as the ones above. They typically produce a set of warnings, each one requiring inspection by a developer to verify, since some of the warnings may be false positives (something the tool thought was a defect, but wasn't). Although static analysis tools can find many kinds of defects, they aren't yet viewed by developers as all that useful, because the false positives are often large in number and the way they are presented makes them difficult to understand (<a href="#johnson">Johnson et al. 2013</a>). There is one exception to this, and it's a static analysis tool you've likely used: a compiler. Compilers verify the correctness of syntax, grammar, and, for statically-typed languages, the correctness of types. As I'm sure you've discovered, compiler errors aren't always the easiest to comprehend, but they do find real defects automatically. The research community is still searching for ways to check more advanced specifications of program behavior.</p>
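To make the paragraph above concrete, here is a minimal sketch in the chapter's JavaScript style of the kind of warning a static analysis tool can produce without running the program; the function name, regular expression, and example string are invented for illustration and are not taken from the book.

```javascript
// String.prototype.match() returns null when nothing matches, so a static
// analysis tool can warn, without running the program, that the access on
// the next line may dereference null.
function firstWord(input) {
  const match = input.match(/[a-z]+/i); // null if input contains no letters
  return match[0];                      // typical warning: possible null dereference
}

// Whether that warning is a real defect or a false positive depends on context
// the tool cannot see: if every caller passes text known to contain at least
// one word, the failure can never occur, and a developer inspecting the
// warning would dismiss it.
console.log(firstWord("count the words")); // "count"
```

A statically-typed language can turn the same check into a compile error rather than a warning: TypeScript, for instance, types the result of match() as possibly null, so the unguarded access would be rejected under strict null checking.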
-<p>Not all analytical techniques rely entirely on logic. In fact, one of the most popular methods of verification in industry is the <strong>code review</strong>, also known as an <em>inspection</em>. The basic idea of an inspection is to read the program analytically, following the control and data flow inside the code to look for defects. This can be done alone, in groups, and even as part of the process of integrating changes, to verify them before they are committed to a branch. Modern code reviews, while informal, help find defects, stimulate knowledge transfer between developers, increase team awareness, and help identify alternative implementations that can improve quality (<a href="bacchelli">Bacchelli & Bird 2013</a>). One study found that measures of how much a developer knows about an architecture can increase by 66% to 150% depending on the project (<a href="#rigby2">Rigby & Bird 2013</a>). That said, not all reviews are created equal: the best ones are thorough and conducted by a reviewer with strong familiarity with the code (<a href="#kononenko">Kononenko et al. 2016</a>); including reviewers who do not know each other or do not know the code can result in longer reviews, especially when run as meetings (<a href="#seaman">Seaman & Basili 1997</a>). Soliciting reviews asynchronously by allowing developers to request reviews from their peers is generally much more scalable (<a href="#rigby">Rigby & Storey 2011</a>), but this requires developers to be careful about which reviews they invest in.</p>
+<p>Not all analytical techniques rely entirely on logic. In fact, one of the most popular methods of verification in industry is the <strong>code review</strong>, also known as an <em>inspection</em>. The basic idea of an inspection is to read the program analytically, following the control and data flow inside the code to look for defects. This can be done alone, in groups, and even as part of the process of integrating changes, to verify them before they are committed to a branch. Modern code reviews, while informal, help find defects, stimulate knowledge transfer between developers, increase team awareness, and help identify alternative implementations that can improve quality (<a href="#bacchelli">Bacchelli & Bird 2013</a>). One study found that measures of how much a developer knows about an architecture can increase by 66% to 150% depending on the project (<a href="#rigby2">Rigby & Bird 2013</a>). That said, not all reviews are created equal: the best ones are thorough and conducted by a reviewer with strong familiarity with the code (<a href="#kononenko">Kononenko et al. 2016</a>); including reviewers who do not know each other or do not know the code can result in longer reviews, especially when run as meetings (<a href="#seaman">Seaman & Basili 1997</a>). Soliciting reviews asynchronously by allowing developers to request reviews from their peers is generally much more scalable (<a href="#rigby">Rigby & Storey 2011</a>), but this requires developers to be careful about which reviews they invest in.</p>
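As a sketch of what the inspection described above looks like in practice, here is a short invented example, again in the chapter's JavaScript style and not from the book, with the kind of comment a reviewer might leave after following the data flow through the function.

```javascript
// An invented snippet under review; the comments mimic a reviewer tracing
// control and data flow to look for defects, as an inspection prescribes.
function averageWordLength(words) {
  let total = 0;
  for (const word of words) {
    total += word.length;
  }
  // Reviewer: if words is empty, this is 0 / 0, which evaluates to NaN.
  // Should we return 0, or throw, for an empty list instead?
  return total / words.length;
}
```

The value of such a comment is not only the defect it catches but the conversation it starts about what the function should do, which is part of why the paragraph above notes that reviews also transfer knowledge and increase team awareness.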
<p>Beyond these more technical considerations around verifying a program's correctness are organizational issues around different software qualities. For example, different organizations have different sensitivities to defects. If a $0.99 game on the app store has a defect, that might not hurt its sales much, unless that defect prevents a player from completing the game. If Boeing's flight automation software has a defect, hundreds of people might die. The game developer might do a little manual play testing, release, and see if anyone reports a defect. Boeing will spend years proving mathematically with automatic program analysis that every line of code does what is intended, and repeating this verification every time a line of code changes. What type of verification is right for your team depends entirely on what you're building, who's using it, and how they're depending on it.</p>
@@ -113,15 +113,15 @@ function count(input) {
<h2>Further reading</h2>
<small>
<p id="ahmed">Iftekhar Ahmed, Rahul Gopinath, Caius Brindescu, Alex Groce, and Carlos Jensen. 2016. <a href="https://doi.org/10.1145/2950290.2950324">Can testedness be effectively measured?</a> In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 547-558.</p>
|
||||
<p id="bacchelli">Alberto Bacchelli and Christian Bird. 2013. Expectations, outcomes, and challenges of modern code review. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 712-721.</p>
|
||||
<p id="beller">Moritz Beller, Georgios Gousios, Annibale Panichella, and Andy Zaidman. 2015. <a href="https://doi.org/10.1145/2786805.2786843">When, how, and why developers (do not) test in their IDEs</a>. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 179-190.</p>
|
||||
<p id="johnson">Brittany Johnson, Yoonki Song, Emerson Murphy-Hill, and Robert Bowdidge. 2013. Why don't software developers use static analysis tools to find bugs? In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 672-681.</p>
|
||||
<p id="kononenko">Oleksii Kononenko, Olga Baysal, and Michael W. Godfrey. 2016. <a href="https://doi.org/10.1145/2884781.2884840">Code review quality: how developers see it</a>. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 1028-1038.</p>
|
||||
<p id="pham">Raphael Pham, Stephan Kiesling, Olga Liskin, Leif Singer, and Kurt Schneider. 2014. <a href="http://dx.doi.org/10.1145/2635868.2635925">Enablers, inhibitors, and perceptions of testing in novice software teams</a>. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 30-40.</p>
|
||||
<p id="rigby">Peter C. Rigby and Margaret-Anne Storey. 2011. <a href="https://doi.org/10.1145/1985793.1985867">Understanding broadcast based peer review on open source software projects</a>. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 541-550.</p>
|
||||
<p id="rigby2">Peter C. Rigby and Christian Bird. 2013. <a href="http://dx.doi.org/10.1145/2491411.2491444">Convergent contemporary software peer review practices</a>. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM, New York, NY, USA, 202-212.</p>
|
||||
<p id="seaman">Carolyn B. Seaman and Victor R. Basili. 1997. <a href="http://dx.doi.org/10.1145/253228.253248">An empirical study of communication in code inspections</a>. In Proceedings of the 19th international conference on Software engineering (ICSE '97). ACM, New York, NY, USA, 96-106.</p>
|
||||
<p id="ahmed">Iftekhar Ahmed, Rahul Gopinath, Caius Brindescu, Alex Groce, and Carlos Jensen. 2016. <a href="https://doi.org/10.1145/2950290.2950324" target="_blank">Can testedness be effectively measured?</a> In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2016). ACM, New York, NY, USA, 547-558.</p>
|
||||
<p id="bacchelli">Alberto Bacchelli and Christian Bird. 2013. <a href="http://dl.acm.org/citation.cfm?id=2486882" target="_blank">Expectations, outcomes, and challenges of modern code review</a>. In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 712-721.</p>
|
||||
<p id="beller">Moritz Beller, Georgios Gousios, Annibale Panichella, and Andy Zaidman. 2015. <a href="https://doi.org/10.1145/2786805.2786843" target="_blank">When, how, and why developers (do not) test in their IDEs</a>. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2015). ACM, New York, NY, USA, 179-190.</p>
|
||||
<p id="johnson">Brittany Johnson, Yoonki Song, Emerson Murphy-Hill, and Robert Bowdidge. 2013. <a href="http://ieeexplore.ieee.org/abstract/document/6606613" target="_blank">Why don't software developers use static analysis tools to find bugs?</a> In Proceedings of the 2013 International Conference on Software Engineering (ICSE '13). IEEE Press, Piscataway, NJ, USA, 672-681.</p>
|
||||
<p id="kononenko">Oleksii Kononenko, Olga Baysal, and Michael W. Godfrey. 2016. <a href="https://doi.org/10.1145/2884781.2884840" target="_blank">Code review quality: how developers see it</a>. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 1028-1038.</p>
|
||||
<p id="pham">Raphael Pham, Stephan Kiesling, Olga Liskin, Leif Singer, and Kurt Schneider. 2014. <a href="http://dx.doi.org/10.1145/2635868.2635925" target="_blank">Enablers, inhibitors, and perceptions of testing in novice software teams</a>. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). ACM, New York, NY, USA, 30-40.</p>
|
||||
<p id="rigby">Peter C. Rigby and Margaret-Anne Storey. 2011. <a href="https://doi.org/10.1145/1985793.1985867" target="_blank">Understanding broadcast based peer review on open source software projects</a>. In Proceedings of the 33rd International Conference on Software Engineering (ICSE '11). ACM, New York, NY, USA, 541-550.</p>
|
||||
<p id="rigby2">Peter C. Rigby and Christian Bird. 2013. <a href="http://dx.doi.org/10.1145/2491411.2491444" target="_blank">Convergent contemporary software peer review practices</a>. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM, New York, NY, USA, 202-212.</p>
|
||||
<p id="seaman">Carolyn B. Seaman and Victor R. Basili. 1997. <a href="http://dx.doi.org/10.1145/253228.253248" target="_blank">An empirical study of communication in code inspections</a>. In Proceedings of the 19th international conference on Software engineering (ICSE '97). ACM, New York, NY, USA, 96-106.</p>
|
||||
|
||||
</small>