Exploring Expanded Validation: Including Externalities such as Energy

Validation is an ongoing area of discussion in both systems and applications work, e.g., [1,2,3,4]. I argue here that we need to reconsider the range of issues included in validation. Validation should incorporate costs and benefits that are normally considered “side issues” or non-issues, such as waste, energy, economic impact, socio-political impact, and privacy of information. From an economic perspective, one might call these externalities [5]. Like all externalities, some issues (such as waste) may never be directly tied back to the work we do. Even so, they may eventually become relevant, should user values come to encompass them. Additionally, one could argue for a moral imperative to consider such externalities in validating what we do.

Why am I arguing that this broader view is important for HCI? Because many issues currently outside our approaches to validation can affect usability (e.g., energy use affects battery life), lead to harm (e.g., privacy problems that enable security violations), or affect viability (e.g., whether an innovation is likely to be affordable to the population it is intended for). Why is this important to research? Because just as we have had to innovate to develop systems that are efficient, scalable, and usable, we may have to innovate to develop systems that are energy efficient, secure, low-waste, and so on.

As an example, consider the issue of energy use. The relative energy use of two solutions to a systems (or applications) problem can vary significantly. It is important to consider not only the energy use of the code being run, but also the overall impact on the number of devices a person may own and the expected lifetime of each device. One can go further and use techniques such as lifecycle assessment [6]. If we expand from energy to sustainability, it makes sense to report ecological impacts, such as projected waste or production-based waste. Energy use and sustainability also spill over into other issues, as illustrated in the table below: the needs of multiple stakeholders must be balanced against the resources consumed, the needs of other species, and so on. Indirect benefits (such as increased civic engagement, or the opportunity for re-appropriation) may also be important. A full validation might be expected to include metrics for scalability, efficiency, energy use, memory use, physical space use (in the case of mobile and ubiquitous devices), and so on.

Environmental Questions:
- How much waste is generated or saved?
- How many resources are consumed?
- How are the needs of other species affected?

Societal Questions:
- How does it address the needs and values of different stakeholders? Who are they?
- Does it engage citizens in learning about energy/sustainability (e.g., through citizen science)?
- Does it support civic engagement?

Economic Questions:
- What are the costs over the lifecycle of the product?
- What other externalities need to be considered?

Computer Science Questions:
- Can we characterize the efficiency of the solution? How about the energy use? And the space use?
- Does it scale? How flexible is the solution?
- Does it support re-appropriation?

Other Questions:
- How relevant will this be in 10 years? 20 years?
- What predictions about the future need to be considered in answering these questions?
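To make the energy comparison above concrete, here is a minimal Python sketch of how one might compare the estimated energy footprints of two implementations of the same computation. The power draw, embodied (manufacturing) energy, and device lifetime figures are assumed placeholders, not measured values; a real study would use hardware power meters or on-chip energy counters, and a proper lifecycle assessment [6] for the embodied share.

```python
import time

# Assumed, illustrative figures only -- not measurements:
ACTIVE_POWER_W = 15.0       # assumed average power draw while computing (watts)
EMBODIED_ENERGY_J = 1.2e9   # assumed embodied (manufacturing) energy of the device
DEVICE_LIFETIME_S = 3 * 365 * 24 * 3600  # assumed 3-year service life

def estimated_energy_j(fn, *args):
    """Crude estimate: wall-clock time x assumed power, plus a share of the
    device's embodied energy amortized over its assumed lifetime."""
    start = time.perf_counter()
    fn(*args)
    elapsed = time.perf_counter() - start
    operational = elapsed * ACTIVE_POWER_W
    embodied_share = EMBODIED_ENERGY_J * (elapsed / DEVICE_LIFETIME_S)
    return operational + embodied_share

def solution_a(n):
    # Quadratic-time variant: sums i*j over all pairs directly.
    return sum(i * j for i in range(n) for j in range(n))

def solution_b(n):
    # Closed-form variant of the same computation: (sum of 0..n-1) squared.
    s = n * (n - 1) // 2
    return s * s

# Comparing the two solutions' estimated energy footprints:
e_a = estimated_energy_j(solution_a, 2000)
e_b = estimated_energy_j(solution_b, 2000)
print(f"solution A ~ {e_a:.4f} J, solution B ~ {e_b:.6f} J")
```

The point of the sketch is not the specific numbers but the framing: once energy is a reported metric alongside run time, the two solutions are no longer equivalent even though they compute the same result.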

Some of these issues are relatively easy to consider in validation, but many require us to draw on techniques with which we have not previously been familiar. Within the field of HCI, there is a history of introducing methods that help to address specific issues such as these. An example is the development of value-sensitive design [7], which can be used to explore the effects of an interactive product across stakeholders. There is a need for further innovation in the methods and metrics that ought to be a component of validating our work.


1.  Olsen, D. R. (2007). Evaluating user interface systems research. In Proc. UIST '07, 251-258.

2.  Gray, W. D., & Salzman, M. C. (1998). Damaged merchandise? A review of experiments that compare usability evaluation methods. Human-Computer Interaction, 13(3), 203-261.

3.  Carter, S., Mankoff, J., Klemmer, S. R. & Matthews, T. (2008). Exiting the cleanroom: On ecological validity and ubiquitous computing. Human-Computer Interaction, 23(1), 47-99.

4.  Greenberg, S. & Buxton, B. (2008). Usability evaluation considered harmful (some of the time). In Proc. CHI '08, 111-120.

5.  Externality. Wikipedia. http://en.wikipedia.org/wiki/Externality

6.  Hendrickson, C. T., Lave, L. & Matthews, S. H. (2006). Environmental life cycle assessment of goods and services: An input-output approach. Resources for the Future, Washington, DC.

7.  Friedman, B., Kahn, P. H., Jr., & Borning, A. (2006). Value Sensitive Design and information systems. In P. Zhang & D. Galletta (eds.), Human-computer interaction in management information systems: Foundations (pp. 348-372). Armonk, New York; London, England: M.E. Sharpe.

