mirror of
https://github.com/amyjko/cooperative-software-development
synced 2024-12-25 21:58:15 +01:00
Fixed #40, citing software analytics.
This commit is contained in:
parent 30d5265868
commit 717d65c52d
1 changed file with 2 additions and 1 deletion
@@ -40,7 +40,7 @@
<h2>Discovering Failures</h2>
-<p>Of course, this is easier said than done. That's because the (ideally) massive number of people executing your software is not easily observable. Moreover, each software quality you might want to monitor (performance, functional correctness, usability) requires entirely different methods of observation and analysis. Let's talk about some of the most important qualities to monitor and how to monitor them.</p>
+<p>Of course, this is easier said than done. That's because the (ideally) massive number of people executing your software is not easily observable (<a href="#menzies">Menzies & Zimmermann 2013</a>). Moreover, each software quality you might want to monitor (performance, functional correctness, usability) requires entirely different methods of observation and analysis. Let's talk about some of the most important qualities to monitor and how to monitor them.</p>
<p>These are some of the easiest failures to detect because they are overt and unambiguous. Microsoft was one of the first organizations to do this comprehensively, building what eventually became known as Windows Error Reporting (<a href="#glerum">Glerum et al. 2009</a>). It turns out that actually capturing these errors at scale and mining them for repeating, reproducible failures is quite complex, requiring classification, progressive data collection, and many statistical techniques to extract signal from noise. In fact, Microsoft has a dedicated team of data scientists and engineers whose sole job is to manage the error reporting infrastructure, monitor and triage incoming errors, and use trends in errors to make decisions about improvements to future releases and release processes. This is now standard practice in most companies and organizations, including other big software companies (Google, Apple, IBM, etc.), as well as open source projects (e.g., Mozilla). Indeed, many application development platforms now include this as a standard operating system feature.</p>
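To make the capture-and-bucket idea concrete, here is a minimal sketch of client-side error reporting in Python. It is not Windows Error Reporting's actual protocol: the collector URL is hypothetical and the bucketing scheme (hashing the exception type plus the innermost call sites) is deliberately simplistic, but it shows how repeated, reproducible failures can be grouped before any triage happens.

# Minimal sketch of client-side crash capture and bucketing.
# The collector URL and bucket scheme are illustrative assumptions,
# not Windows Error Reporting's actual design.
import hashlib
import json
import sys
import traceback
import urllib.request

REPORT_URL = "https://errors.example.com/report"  # hypothetical collector


def bucket_id(exc_type, tb):
    """Group failures by exception type and innermost call sites so
    that repeated occurrences of the same crash share one bucket."""
    frames = traceback.extract_tb(tb)[-5:]
    signature = exc_type.__name__ + "|" + "|".join(
        f"{frame.filename}:{frame.name}" for frame in frames
    )
    return hashlib.sha256(signature.encode()).hexdigest()[:16]


def report_uncaught(exc_type, exc_value, tb):
    """Send a crash report, then fall back to the default handler."""
    payload = {
        "bucket": bucket_id(exc_type, tb),
        "message": str(exc_value),
        "stack": traceback.format_exception(exc_type, exc_value, tb),
    }
    try:
        request = urllib.request.Request(
            REPORT_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request, timeout=2)
    except Exception:
        pass  # telemetry failures must never mask the original crash
    sys.__excepthook__(exc_type, exc_value, tb)  # still surface the error


sys.excepthook = report_uncaught

Real pipelines layer the progressive data collection and statistical filtering described above on top of this kind of basic capture-and-bucket step.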
@@ -90,6 +90,7 @@
<p id="chilana2">Parmit K. Chilana, Andrew J. Ko, Jacob O. Wobbrock, Tovi Grossman, and George Fitzmaurice. 2011. <a href="http://dx.doi.org/10.1145/1978942.1979270" target="_blank">Post-deployment usability: a survey of current practices</a>. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 2243-2246.</p>
<p id="glerum">Kirk Glerum, Kinshuman Kinshumann, Steve Greenberg, Gabriel Aul, Vince Orgovan, Greg Nichols, David Grant, Gretchen Loihle, and Galen Hunt. 2009. <a href="http://dx.doi.org/10.1145/1629575.1629586" target="_blank">Debugging in the (very) large: ten years of implementation and experience</a>. In Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles (SOSP '09). ACM, New York, NY, USA, 103-116.</p>
<p id="ivory">Ivory M.Y., Hearst, M.A. (2001). <a href="http://doi.acm.org/10.1145/503112.503114" target="_blank">The state of the art in automating usability evaluation of user interfaces</a>. ACM Computing Surveys, 33(4).</p>
<p id="menzies">Menzies, T., & Zimmermann, T. (2013). <a href="https://www.computer.org/csdl/magazine/so/2013/04/mso2013040031/13rRUyY28Wp">Software analytics: so what?</a> IEEE Software, 30(4), 31-37.</a>
<p id="kim">Miryung Kim, Thomas Zimmermann, Robert DeLine, and Andrew Begel. 2016. <a href="https://doi.org/10.1145/2884781.2884783" target="_blank">The emerging role of data scientists on software development teams</a>. In Proceedings of the 38th International Conference on Software Engineering (ICSE '16). ACM, New York, NY, USA, 96-107.</p>
</small>