mirror of
https://github.com/amyjko/cooperative-software-development
synced 2024-12-26 21:58:27 +01:00
Writing improvements.
This commit is contained in:
parent 03cd610312
commit c314e4b6b7
1 changed file with 4 additions and 2 deletions
@@ -48,13 +48,15 @@
<p>It's also hard to monitor performance without actually <em>harming</em> performance. Many tools and services (e.g., <a href="https://newrelic.com/">New Relic</a>) are getting better at reducing this overhead and offering real-time data about performance problems through sampling.</p>
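<p>To make the overhead tradeoff concrete, here is a minimal sketch of sampling-based timing in Python: only a small fraction of calls are timed, so most requests pay no monitoring cost. This is an illustration, not how New Relic or any particular service works; the <code>sink</code> callback and the sample rate are assumptions made for the example.</p>
<pre>
import functools
import random
import time

def sampled_timer(sample_rate=0.01, sink=print):
    """Time roughly sample_rate of calls and report them to sink;
    all other calls run with no instrumentation at all."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Most calls skip the timing path entirely, keeping overhead low.
            if random.random() > sample_rate:
                return fn(*args, **kwargs)
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                sink({"op": fn.__name__, "ms": round(elapsed_ms, 3)})
        return wrapper
    return decorate

# Hypothetical handler, timed on about 5% of calls.
@sampled_timer(sample_rate=0.05)
def handle_request(payload):
    ...
</pre>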
-<p>Monitoring for data breaches, identity theft, and other security and privacy concerns are incredibly important parts of running a service, but also very challenging. This is partly because the tools for doing this monitoring are not yet well integrated, requiring each team to develop its own practices and monitoring infrastructure. But it's also because protecting data and identity is more than just detecting and blocking malicious payloads, but also about recovering from ones that get through, developing reliable data streams about application network activity, monitoring for anomalies and trends in those streams, and developing practices for tracking and responding to warnings that your monitoring system might generate. Researchers are still actively inventing more scalable, usable, and deployable techniques for all of these activities.</p>
+<p>Monitoring for data breaches, identity theft, and other security and privacy concerns is an incredibly important part of running a service, but also very challenging. This is partly because the tools for doing this monitoring are not yet well integrated, requiring each team to develop its own practices and monitoring infrastructure. But it's also because protecting data and identity is more than just detecting and blocking malicious payloads. It's also about recovering from ones that get through, developing reliable data streams about application network activity, monitoring for anomalies and trends in those streams, and developing practices for tracking and responding to warnings that your monitoring system might generate. Researchers are still actively inventing more scalable, usable, and deployable techniques for all of these activities.</p>
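<p>As one illustration of monitoring a stream for anomalies, here is a minimal sketch that flags intervals whose request counts sit far above their recent history. It is a toy detector under simple assumptions (a single stream of per-minute counts, a fixed deviation threshold), not a substitute for the practices described above.</p>
<pre>
import statistics
from collections import deque

class RateAnomalyDetector:
    """Flag intervals whose count is far above the recent average --
    a toy stand-in for the anomaly monitoring described above."""

    def __init__(self, window=60, threshold=4.0):
        self.history = deque(maxlen=window)  # recent per-interval counts
        self.threshold = threshold           # deviations above the mean that count as unusual

    def observe(self, count):
        anomalous = False
        if len(self.history) >= 10:  # wait for a little history first
            mean = statistics.mean(self.history)
            spread = statistics.pstdev(self.history) or 1.0
            anomalous = (count - mean) / spread > self.threshold
        self.history.append(count)
        return anomalous

# Example: a steady baseline followed by a sudden spike.
detector = RateAnomalyDetector(window=30, threshold=4.0)
traffic = [100, 103, 98, 101, 99, 102, 97, 100, 104, 101, 99, 650]
for minute, count in enumerate(traffic):
    if detector.observe(count):
        print(f"Unusual request volume in minute {minute}: {count}")
</pre>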
<p>The biggest limitation of the monitoring above is that it only reveals <em>what</em> people are doing with your software, not <em>why</em> they are doing it, or why it has failed. Monitoring can help you know that a problem exists, but it can't tell you why a program failed or why a persona failed to use your software successfully.</p>
<h2>Discovering Missing Requirements</h2>
<p>Usability problems and missing features, unlike some of the preceding problems, are even harder to detect or observe, because the only true indicator that something is hard to use is in a user's mind. That said, there are a couple of approaches to detecting the possibility of usability problems.</p>
-<p>One is by monitoring application usage. Assuming your users will tolerate being watched, there are many techniques for automatically instrumenting applications for user interaction events, for mining these events for problematic patterns, and for browsing and analyzing these patterns for more subjective issues (<a href="#ivory">Ivory & Hearst 2001</a>). Modern tools and services like <a href="https://www.intercom.com/">Intercom</a> make it easier to capture, store, and analyze this usage data, although they still require you to have some upfront intuition about what to monitor. More advanced, experimental techniques in research automatically analyze undo events as indicators of usability problems (<a href="#akers">Akers et al. 2009</a>).</p>
+<p>One is by monitoring application usage. Assuming your users will tolerate being watched, there are many techniques: 1) automatically instrumenting applications for user interaction events, 2) mining events for problematic patterns, and 3) browsing and analyzing patterns for more subjective issues (<a href="#ivory">Ivory & Hearst 2001</a>). Modern tools and services like <a href="https://www.intercom.com/">Intercom</a> make it easier to capture, store, and analyze this usage data, although they still require you to have some upfront intuition about what to monitor. More advanced, experimental techniques in research automatically analyze undo events as indicators of usability problems (<a href="#akers">Akers et al. 2009</a>); this work observes that undo is often an indicator of a mistake in creative software, and mistakes are often indicators of usability problems.</p>
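<p>Here is a minimal sketch of the undo-mining idea, assuming interaction events have already been captured as records with a session identifier and an action name. The event format, thresholds, and the <code>undo</code> label are illustrative choices for the example, not details of the techniques cited above.</p>
<pre>
from collections import Counter, defaultdict

def sessions_with_high_undo_rate(events, min_events=20, max_undo_rate=0.15):
    """Group interaction events by session and flag sessions where undos
    make up an unusually large share of actions -- candidate usability
    problems to review by hand, in the spirit of Akers et al. (2009)."""
    actions_by_session = defaultdict(Counter)
    for event in events:
        # Each event is assumed to be a dict with a session id and an action name.
        actions_by_session[event["session"]][event["action"]] += 1

    flagged = []
    for session, actions in actions_by_session.items():
        total = sum(actions.values())
        undo_rate = actions["undo"] / total
        if total >= min_events and undo_rate > max_undo_rate:
            flagged.append((session, undo_rate))
    # Worst offenders first, so reviewers start with the most suspicious sessions.
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)
</pre>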
<p>All of the usage data above can tell you <em>what</em> your users are doing, but not <em>why</em>. For this, you'll need to get explicit feedback from support tickets, support forums, product reviews, and other critiques of user experience. Some of these types of reports go directly to engineering teams, becoming part of bug reporting systems, while others end up in customer service or marketing departments. While all of this data is valuable for monitoring user experience, most companies still do a bad job of using anything but bug reports to improve user experience, overlooking the rich insights in customer service interactions (<a href="#chilana2">Chilana et al. 2011</a>).</p>