
A DIY Electronic Survey Device for Studying User Experience

Contributor: John T. Sherrill
Affiliation: Qatar University
Email: jsherrill at qu.edu.qa
Released: 9 Sept 2019
Published: To be included in the next issue of Kairos

Introduction

This webtext describes a DIY electronic survey device for studying participant experiences in makerspaces, educational spaces, and other contexts where online or print surveys are impractical; it explains the device's design and demonstrates its usefulness in those settings. I first describe why I built the device for my dissertation research on feminist makerspaces, as well as why I chose to build my own rather than use a commercial kiosk. I then describe what the device does, provide use cases from educational spaces, and suggest additional potential use cases. Third, I explain how the survey works and how the data is stored in order to highlight the affordances and constraints of this technology. Based on those affordances and constraints, I provide alternate configurations of the device and document my plans for further development. Finally, I provide instructions and assets for building the survey device.

A Summary for Builders

The device described and documented in this webtext has been used to study user experience in two Midwestern makerspaces, providing a consistent measurement for comparing user satisfaction across a range of spaces. Due to the relatively low cost of building the device (~$140 or less), the simple setup (plugging it in), and the access to plaintext data (allowing for a range of analytics), it has several advantages over existing commercial solutions for researchers and spaces. First, because the device is open source and low cost, it's particularly fitting for researching DIY/craft/"maker" communities, as many makerspaces have the tools necessary to build this device. It's another tool that spaces and communities can use to get feedback from participants, and to represent community identity through customized interfaces. Second, given that DIY communities often support open source and open access philosophies and technologies, the design of this device as something to be easily customized, remixed, and/or personalized is also more rhetorically fitting than a mass-produced blackboxed kiosk. Third, and finally, the device offers direct access to the data it collects rather than going through a third party. While this last feature does mean that researchers have to produce their own visualizations, it also simplifies the IRB approval process for academic researchers by providing control over who can view or analyze data, and how that data is stored. That is, this device can help avoid concerns about third-party data access during IRB review, and it places all responsibility for data storage and privacy on researchers (reducing the chance of a data breach compared with an online storage service). Furthermore, plaintext data helps facilitate sharing information within and across communities in a way that commercially aggregated results do not. The sections below discuss the theory behind this device in more detail, while the How to Build section includes instructions for building your own survey device.

Considering functional transparency, and inspired by existing devices, I designed a four-button Likert scale of smiling/frowning faces to measure experience ratings (as shown below in Figure 1). The buttons are color coded based on traffic light color associations, with green buttons for smiles and red buttons for frowns. The design was inspired by the HappyOrNot Kiosk, a 4-button survey device widely used to collect user feedback in hospitals, airports, and businesses. However, my design is functionally different from the HappyOrNot Kiosk in that it does not wirelessly transmit data (a potential privacy concern), does not process the data automatically, and isn't based on a paid service model. Instead, researchers control access to the plaintext data, allowing them to process and analyze the data as they wish. Furthermore, considering participant privacy and security, the data is physically secured inside a locked box on an SD card, rather than being transmitted wirelessly or stored on third-party servers. Most significantly though, the device can be custom built or easily modified to meet and reflect the aesthetic and functional needs of individual communities.

Survey device with Likert-scale frowning/smiling faces. A finger presses the happiest face, and a 'Thanks!' sign is lit
Figure 1. The finished DIY survey device

Existing Scholarship

This device also builds on the work of scholars in rhetoric and composition, as well as in technical communication, as much as it does the work of DIY/Craft/“Maker” communities.1 It is my hope that fellow digital humanities researchers find this device useful in studying makerspaces, that members of makerspaces find it useful for self-assessing the effectiveness of spaces, and that instructors find it useful in assessment and teaching. In short, while this project involves soldering and programming, and requires safety glasses, it builds on current and past scholarship in multiple humanities disciplines. Within composition broadly, Jennifer Nichols et al. (2017) have “explore[d] the evolution and role of makerspaces in academic libraries, with a particular focus on how libraries are using innovation spaces in support of entrepreneurship and digital humanities on campus” (p. 363), specifically positioning library makerspaces as “an unprecedented centralized working space for digital humanists” (p. 367). Similarly, the 2019 Feminisms and Rhetorics conference theme centered “DIY feminist activism” (Fancher, 2018), Harlot dedicated a special issue to “Craft Rhetorics” (Buck et al., 2015), and Ann Shivers-McNair (2017) discussed using innovative research methods to study makerspaces. More specifically, Shivers-McNair called for using head-mounted cameras to create first-person video recordings in makerspaces. She described how this “3D interviewing” technique was useful in the unique context of a makerspace, as it allowed her to record the sights, sounds, interactions, and motions of participants working in the space, which would otherwise be difficult to capture. By contrast, the device I describe in this webtext aims to collect relatively simple “thin” data, rather than rich recordings, but it responds to similar research challenges, as I describe in the next section. Though my device’s analytic value is strengthened by interviews, observations, and other research methods, the device is significant for researchers because it asks only a single question and yields quick results, even with relatively small sample sizes. It is particularly significant because it affords access to user experiences and data that would otherwise be impractically tedious, if not impossible, to collect via online or print surveys, interviews, focus groups, etc. Though at times the data collected is not as qualitatively rich as what other methods yield, this method is largely automated and requires minimal monitoring or active engagement from the researcher.

As most researchers would generally agree, people are more inclined to participate in shorter surveys than longer surveys. Alex R. Trouteaud (2004) provided evidence to support this truism in their study of email surveys. Interestingly, although Trouteaud suggested that short surveys do yield more responses, they found that repeated invitations and the wording of invitations have a larger impact on participation than survey length (p. 389). That said, in the context of makerspaces, where people generally enter and exit frequently and inconsistently, attention to survey length and invitations is especially critical if one expects to collect survey data without access to a central mailing list or website. Consequently, a single-question push-button survey minimizes length and completion time, but the rhetoric of the invitation remains important (and as I suggest in this webtext, the physical design of the survey acts as an invitation in the context of makerspaces). In any case, this short electronic survey device is designed to minimize barriers to participation and to automate what would otherwise be a tedious data collection process for both researchers and participants.

Of course, the meaningfulness of results from a single-question survey is limited. Yet, in the context of healthcare and insurance, Arlene S. Bierman et al. (1999) reported that, based on a sample of nearly 9,000 participants, “The response to a single question about general health status strongly predicts subsequent health care utilization” (pp. 56–57). Specifically, “Respondents who rated their health as poor had 675 hospitalizations per 1000 beneficiaries per year compared with 136 per 1000 for those rating their health as excellent,” which also correlated with a “fivefold” difference in spending on health expenses (p. 56). In other words, for the purpose of quickly assessing overall health, a single question was quite useful. Of course, there is inherent risk in attributing too much meaning to such a short survey, not only in healthcare but also in academic assessment and user experience research. As an example of how to avoid such risky attributions, the post-industrial smart city of Dubai provides a useful reference.

Dubai has implemented a “Happiness Meter” as part of its national assessment of tourist and resident happiness in an effort “to become the happiest city on earth,” as described by M. Sajid Khan et al. (2017). This Happiness Meter, “launched across all government entities, with online and offline interfaces” (Khan et al., 2017), is similar in design to the HappyOrNot Kiosk and other satisfaction surveys in that it asks a single question, but is noteworthy because of its national scale. Regardless of scale, the primary value of the survey data lies in how it’s combined with multiple measures of happiness and satisfaction (Khan et al., 2017; Al-Azzawi, 2019). That is, while useful as a single measure, a single-question survey cannot yield results as meaningful as a longer survey, nor can it replace interviews, focus groups, and other research methods. However, these methods do not always scale easily, and they require a higher level of investment from participants than simply pushing a button. Furthermore, at a national level as well as a local one, interviews and focus groups require considerably more effort from researchers (e.g., recording, transcription, data entry, etc.). My experience studying makerspaces, as described in the following sections, suggests there are significant challenges to recruiting via online and print surveys, and to conducting interviews, within public and semi-private spaces. That said, at the national scale of the Dubai example, it’s also important to address ethical concerns about surveillance and government-sponsored assessments. To address such concerns and better serve the public good, Dubai makes some of the data collected available for public use by default (Khan et al., 2017). Similarly, this webtext addresses the usability, meaningfulness, and ethical considerations of designing and using an open source, single-question, push-button electronic survey device, along with its application for studying makerspaces and other educational spaces.

In addition to being a useful tool for researchers in the digital humanities broadly, building on David M. Sheridan’s (2010) positioning of digital fabrication as rhetorical work, this device may also be useful for instructors who use making in their teaching (see Shipka, 2011; Faris et al., 2018; West-Puckett, 2013; and Tham, 2018 for ways making has been used for teaching within humanities disciplines at colleges and universities). That is, not only is this device a valuable tool for collecting assessment data, but it may also provide an opportunity for students to actively participate in the design (and fabrication) of the assessment process by quite literally constructing the survey themselves. This may be particularly relevant to courses that cover research methods in the digital humanities, composition, and technical communication, and I would further argue that building these types of devices and methods (in addition to using them) is the work of the humanities. Although this webtext focuses on how this device is useful for researchers specifically, the instructions for how to build the device may equally serve instructors.

Why I Designed It: A Problem of Recruiting

This project responds to unforeseen challenges of researching feminist maker/hackerspaces using snowball sampling, as an outsider, for my dissertation. Specifically, the survey device was a response to a perceived lack of data. For my dissertation, in order to study issues of diversity and inclusion within makerspaces, I planned to distribute an online survey, conduct ten or more interviews, and conduct 4–5 site studies of feminist makerspaces within the U.S. I had initially planned to contact 18 Midwestern feminist maker/hacker communities via meetup.com in order to distribute my online survey and learn about potential maker/hackerspaces for site studies. Unexpectedly, after messaging the first meetup group a link to my online survey, and asking if community members had suggestions for local spaces I might study, my meetup.com account was banned within 24 hours. This ban was likely because I was not a member of the first meetup group I contacted, my account was relatively new, and I hadn't had any prior contact with the groups, in addition to being a white cisgender man. Not surprisingly, the majority of the meetup groups did not list alternate contact channels (likely in order to avoid spam, unwanted attention, and online harassment). I was only able to individually contact two of the groups through alternate channels. Despite receiving a positive response from two communities, because of this setback, I initially worried that I might not have enough responses to my online survey. Although I distributed the survey elsewhere, and survey participants provided detailed responses, there weren't enough responses to do the types of cross referencing and triangulation with larger data sets that I had planned (though I did have enough survey data to complete the study). Furthermore, because of the lower survey response rate, I had fewer participants for follow-up interviews than expected. Finally, rather than my proposed 4–5 site studies, due to cancellations and the aforementioned issues contacting spaces, I was only able to observe two makerspaces. 

Concerned, I talked with my chair about the situation. He suggested that I might develop shorter and more pointed online surveys. While this would likely yield a higher response rate than the single longer survey I designed, I was worried that distribution and recruiting would still be a challenge. I had limited options for proceeding with survey distribution: distribute a new shorter survey through the same channels I had used twice previously (at the risk of irritating email list and Facebook group members), create a new meetup account and attempt to work around spam filtering at the risk of another or more permanent account ban, or try to find new recruiting channels and further delay data collection. There was also the option of distributing the survey to much larger mainstream "maker" email lists, but given that my research focused on feminist spaces and non-normative groups, doing so would have increased the risk of trolls participating, which would have required more diligent filtering of responses. Given these options, the most promising approach seemed to be distributing a print survey within the spaces I had already visited and with which I had already developed working relationships. However, these spaces posed interesting challenges for researchers, many of which are common to makerspaces broadly.

Unlike classes, businesses, or events, many maker and coworking spaces operate similarly to a library or a gym. That is, membership grants access during certain open hours, but who uses the space, for how long, and for what purposes varies considerably on any given day or week. In some cases, meetups in a space are totally impromptu and are announced only a few hours beforehand via email or social media. Fortunately, both spaces I studied at least had regular operating hours, though one was open 24 hours a day while the other was only open during regular business hours. Although both spaces held semi-regular events and workshops, attendance varied widely and unpredictably, and some events were impromptu or held ad hoc. Outside of events and workshops, the spaces were generally open use, and members came and went as they pleased while working on various projects. Consequently, managers in both spaces were uncertain about what times and days were busiest, and suggested that peak times varied without rhyme or reason. As such, it was hard to get an accurate assessment of each space through observations alone, and I could only observe each space for a few days. To account for this limitation, I placed recruitment flyers for my online survey in both spaces. I assumed that members would see the flyers as they worked in the space and complete the online survey immediately or at home. This attempt was a total failure, however: no one from either space that I observed participated in the online survey. The same flyer was used successfully for online recruiting, though, suggesting that it wasn't an issue of poor flyer design as much as the medium itself. In part, this could be explained by the relative privacy of reading the flyer digitally, given that the survey partially focused on uncomfortable experiences in makerspaces. Additionally, the digital version was more convenient, since participants only had to click a link rather than type a shortened URL or scan a QR code. Given this experience, I remained skeptical that a print survey would yield higher participation.

In theory, distributing paper surveys might have worked well in these spaces. However, in addition to my skepticism after the failure of the printed flyers, printed surveys would have required spaces to either mail me results or for me to pick them up (an impractical option, given the geographic distance between the spaces I observed and the extra work for participants of mailing data to me). Furthermore, although paper surveys might have improved response rates, they also would have required data entry, which wouldn't be scalable beyond one or two sites at a time. And, as one of the space managers from my dissertation study noted, manually collecting information from members is fine when there are only 50–100 members, but as membership expands, manual data entry into spreadsheets doesn't cut it.

Pondering the constraints of this situation, my dissertation chair suggested that a simple one-question kiosk with smiley and frowny faces might help significantly. Such kiosks are used in many hospitals, doctor's offices, and other similar spaces where quick feedback is very helpful, but where the incentive to respond is often minimal. A kiosk would solve the issues of shortening the survey, distributing the survey and collecting results, and data entry. While the idea was fitting and intriguing, $125 or more for a HappyOrNot kiosk and monthly service charge didn't seem practical for my research budget as a PhD candidate. Finding a grant in a timely manner also seemed unlikely. Given this price point, I felt resigned to the idea of a print survey as the best option. As I thought about manually entering survey data though, the convenience of the kiosk remained appealing. "How does it work?" I wondered on my drive home from campus, reflecting on my experience disassembling printers and other electronics for spare parts and components. "It has four buttons, records a timestamp, and then probably beams that data over wifi or 3G to a server for analysis... it can't possibly cost $125 or $150 for a few buttons and a tiny computer."

"Hmmm, but why should I bother building something if I can just download a survey app to a tablet?" I wondered after reaching home. For the sake of convenience, I checked what existing solutions were available online. After browsing through a few apps, I shuddered, remembering the times I've been asked to provide an email address or phone number at a checkout counter. "Does Party City really need to know my phone number and email? They're not going to believe me if I say I don't have either. I just want my Halloween decorations..." I also realized in that moment how many deceptive Dark Pattern (a concept coined by Harry Brignull in 2010) touchscreen interfaces I'd encountered that kept popping up additional survey questions or text fields (for example, the seemingly unending multi-page surveys at Purdue's student health center, which also asks patients to type in multiple responses). Compared with a touch screen, physical buttons are generally limited in their function. Analog buttons can be pressed or held, depending on the type of button, but they're unlikely to drastically transform once pressed. In short, there's less chance of a "gotcha!" with an analog interface (though what is inside the box behind the interface remains invisible). Even so, a metal box doesn't typically radically transform as an interface. Comparatively, clicking "Next" on a touch screen survey could transform the button, yield a page with further questions, or even lead to an entirely new interface.

Use Cases

As of fall 2018, I have used the survey device to successfully study two Midwestern makerspaces. In just six weeks I received a total of 94 responses, and I was able to analyze and report my findings to each space within days. This speed enabled me to quickly triangulate the results of my online survey, interviews, and site studies with the data from the survey device. I was also able to provide a breakdown of overall participant satisfaction in each space I studied, indicate when each space was most and least busy, and suggest which events were more or less attended based on responses. Each space also kept copies of all data for their own further analysis. Sharing the plaintext data and my analysis with space managers quickly was particularly important because the data collected are "thin." That is, aside from a timestamp and rating, there are many unknowns. Acknowledging the limitations of the data also means respecting that participants and managers in a space are often better able to contextualize and make meaning from survey results than researchers. For example, in both makerspaces I studied, I was aware of the total number of members in each space, but I didn't have a clear sense of how many people used each space on an average day. Although the space managers didn't have an exact number either, they could better estimate the response rate and consequently assess whether spikes in responses at particular times were meaningful or just minor variances. This is particularly important if spaces are being assessed or audited by outsiders. The data alone do not tell a meaningful narrative without the firsthand experience and knowledge of the community and its leaders (a point about recognizing community expertise that applies to other humanist methods as well). In other words, this is a useful tool for flagging potential issues that would otherwise be hard to identify, but the survey doesn't clearly identify specific issues or evaluate possible causes. It works best in combination with other research and usability methods, particularly participatory methods. That said, the survey device is useful for measuring the following:

  • Ratings over time
  • Ratings by month, week, day, hour, etc.
  • Peak times/dead times
  • Evaluating events, presentations, or workshops
  • Evaluating individual appointments or sessions
  • Attendance rates
  • Comparing responses from different stakeholder/participant roles (e.g., students vs tutors)

Additionally, modifying the design of the device itself, providing multiple devices, or changing prompts periodically all open up further research opportunities. I discuss a few additional use cases in the Alternate Configurations section, but at the time of writing, I am still testing the survey device in different contexts. The purpose of testing is twofold: to evaluate the usability of the design, and to find out what other researchers are interested in studying given the affordances of this device.

How It Works and Data Storage

Inside the box is an Arduino Uno, which is an open source microcontroller. Microcontrollers are small programmable computers that are particularly useful for interpreting analog and digital input/output. For example, microcontrollers make it easy to detect when a button or key is pressed, and then convert that signal to some type of output, such as a blinking light or turning on a motor. However, microcontrollers have limited storage space and computing power. So, although they're particularly good for automating relatively simple tasks and controlling devices, they're not great for running more complex applications or crunching data. With the help of an Adafruit Data Logger Shield though, the Arduino Uno can log and save input and output to an SD card, acting as a temporary hard drive. 

The whole system runs on 5V electricity (the same as a typical smartphone charger or USB port) and is plugged into a wall outlet. The buttons on the outside of the box are wired to the Arduino. When a button is pressed, the Arduino detects which button was pushed, and the Data Logger Shield tracks the date and time, and then writes that data to an SD card as a text file. At the same time, the “Thanks!” light on the box turns on, and stays on for about 2 seconds. While the “Thanks!” light is on, no additional button presses are recorded, which helps prevent someone from spamming the survey. This delay can be adjusted, particularly for situations where buttons may be pushed in quick succession by a group of people (e.g., at the end of a workshop or class) and a 2-second delay might mean missed ratings. Alternatively, if kids or other button-pushing enthusiasts are participating, then a longer delay can help prevent inaccurate results.
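To make that logic concrete, here is a minimal, illustrative Arduino sketch of the read-log-light loop. This is not the author's 4-button_data_logger code (available in the attachments file in Step 1 of the build instructions); it assumes the data logger shield's PCF8523 real time clock via the RTClib library (older shield revisions use a DS1307 instead, with the same RTClib interface), buttons wired to pull pins 0–3 LOW when pressed, the "Thanks!" LED on pin 8, and the shield's SD chip select on pin 10:

#include <SPI.h>
#include <SD.h>
#include "RTClib.h"

const int nbts = 4;                   // number of buttons
const int startpin = 0;               // first button pin; wired to pins 0-3 (0 and 1 double as serial pins, so this sketch avoids Serial)
const int ledPin = 8;                 // "Thanks!" LED backlight
const int chipSelect = 10;            // SD chip select on the data logger shield
const unsigned long thanksMs = 2000;  // how long the "Thanks!" light stays lit

RTC_PCF8523 rtc;

void setup() {
  for (int i = 0; i < nbts; i++) {
    pinMode(startpin + i, INPUT_PULLUP);  // buttons short the pin to ground, so pressed = LOW
  }
  pinMode(ledPin, OUTPUT);
  rtc.begin();
  SD.begin(chipSelect);
  File logfile = SD.open("LOG00.TXT", FILE_WRITE);  // the real sketch numbers files 00-99
  if (logfile) {
    logfile.println("Button,Timestamp");
    logfile.close();
  }
}

void loop() {
  for (int i = 0; i < nbts; i++) {
    if (digitalRead(startpin + i) == LOW) {  // button i pressed
      DateTime now = rtc.now();
      File logfile = SD.open("LOG00.TXT", FILE_WRITE);
      if (logfile) {
        logfile.print(i);            logfile.print(',');
        logfile.print(now.year());   logfile.print('/');
        logfile.print(now.month());  logfile.print('/');
        logfile.print(now.day());    logfile.print(' ');
        logfile.print(now.hour());   logfile.print(':');
        logfile.print(now.minute()); logfile.print(':');
        logfile.println(now.second());
        logfile.close();
      }
      digitalWrite(ledPin, HIGH);  // "Thanks!" light on; no presses are read during the delay
      delay(thanksMs);
      digitalWrite(ledPin, LOW);
      while (digitalRead(startpin + i) == LOW) {}  // wait for release so holding doesn't re-log
    }
  }
}

Because loop() blocks in delay() while the light is on, presses during those two seconds are simply never read, which is what prevents spamming; lengthening thanksMs lengthens that lockout.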

When a button is pressed, the collected data is stored as a .txt file on an SD card inserted into the Data Logger Shield inside the locked box. Each time the box is powered off and back on, it logs to a new file, creating up to 100 files total. A typical log might look like this:

Button,Timestamp
3,2018/11/16 12:46:48
2,2018/11/16 12:46:52
0,2018/11/16 12:46:55

Although it’s simple, this data can be used to track and visualize information quickly. The data can easily be transformed into pie and bar charts using Excel, R, or online data visualization tools. What makes the data even more valuable is being able to connect it with additional quantitative and qualitative information. This is doubly true in contexts where additional data is already (or automatically) recorded digitally. In the context of makerspaces, many already track membership, check-ins or check-outs of the space, event dates and times, workshop leaders, member demographics, and other metadata about the space and its participants, but lack more granular ratings of user experience. For example, this device allows researchers to track experience ratings for a particular event based on the time and day. Knowing how many participants attended a particular event or expressed interest in attending (based on event sign-ups, mailing lists, Facebook responses, etc.) is even more useful for analysis. In some cases, knowing how many attendees were new vs. regulars, or who was leading a workshop, may be even more valuable in relation to the thin data of the survey box (e.g., if an event with mostly new attendees received mostly negative responses). In any case, it is a reciprocal relationship in which "thin" pieces of data enhance one another's value.

Alternate Configurations

In every use case, it's important to recognize that the data only records time, date, and rating. As such, the device is particularly useful as a diagnostic tool. It can highlight successes and flag potential issues or instances of concern that might otherwise go unnoticed. But it doesn't offer qualitative feedback such as follow-up comments or reviews. It can, however, be used to collect richer data over time, similar to a normal survey. The prompt accompanying the box acts as a survey question, and thus shapes the meaning of any data collected. Using a series of prompts over several weeks or months can help triangulate responses, like a less granular multi-question survey. For example, using a prompt of "Please rate your experience today!" for a month followed by "Please rate the cleanliness of the space!" for a second month, and so on, may offer richer data than two months of a general rating. 

Yet another possible reconfiguration is to modify the buttons themselves (simply print new stickers), shifting the range of responses from points on a Likert scale to radio buttons or a simple counter. Colleen Graves (2018b) demonstrated this shift in button styles in her project "Makey Your Own Interactive Exit Ticket with Scratch and Makey Makey." Inspired by the same rating systems that inspired my project, Graves designed two systems for use in the context of elementary schools: a system that tracks how many students from each grade visit the school library, and one that asks students to "Please Rate today's Learning" using the options "Nailed it, I mostly understood, I still have questions, I need more practice." Each of Graves' projects combined paper prototyping with a Makey Makey (a simplified, $45 plug and play microcontroller) and a computer. As such, the Makey Makey-based interface is tied to a laptop or desktop, but it is accessible for kids to design and easy to customize quickly. Adding a Raspberry Pi (a small, $35 computer) onto Graves' approach creates another possible configuration that doesn't rely on a tether to a laptop or desktop computer. Jen Fox's "Interactive Survey Game" (2016) provided a demonstration of one possible configuration of such an interface, and also emphasized transparency. In Fox's project, again, the Makey Makey made it easy to rapidly prototype electronic survey interfaces with paper and aluminum foil. The downside is that the interfaces aren't very durable.

Graves', Fox's, and my approaches each offer layers of access to prototyping electronic survey interfaces. All three take advantage of open source hardware and software. Depending on the context of use and the goal of the interface, each may be more or less fitting. A Makey Makey and a laptop facilitate quick paper prototyping of interfaces across a wide range of contexts at little cost. The Makey Makey makes it easy to swap components using alligator clips, and only requires that materials be conductive (one popular example is to connect bananas to a Makey Makey in order to create a "banana piano"). This ease of swapping significantly reduces barriers to access. As Graves (2018b) demonstrated, it allows elementary students (and adults) to participate in the prototyping process and to involve themselves in the research. However, this solution isn't fitting for situations where a laptop or computer needs to be left unattended for extended periods. Adding a Raspberry Pi resolves the issue of a laptop, making the prototyping process more mobile. Using a Pi also adds a layer of technical knowledge for researchers, though, in that they have to configure the Pi properly. Although configuring a Pi is still a relatively accessible process, I preferred the ease of uploading code to an Arduino, as it meant one less variable to troubleshoot. At the same time, using a Raspberry Pi opens possibilities for automatic data processing and web connectivity, making it a good choice for researchers who don't wish to regularly check an SD card for data (or who want to show real-time ratings in a space, for example). In theory, the physical components of my design could be attached to a Makey Makey and Raspberry Pi for a similar price, though the wiring would be less durable. Using an Arduino and off-the-shelf buttons still allows for customization, but assumes that the electronic aspects of the interface will remain relatively unchanged, adding a layer of durability through mass-produced components and more permanent wiring. That said, there are a few limitations to the current design of this device.

Further Development

Accessibility

One advantage to a DIY approach (leveraging digital fabrication and/or rapid prototyping technologies) is that it's easy to iterate and improve the design of the box, and to tailor it to specific audiences and needs. As Fred Gibbs and Trevor Owens (2012) argue in "Building Better Digital Humanities Tools: Toward broader audiences and user-centered designs," one of the current limitations of digital tools for humanities researchers is their usability. While the survey device I describe in this webtext is not a plug-and-play solution like the HappyOrNot Kiosk, and may therefore not be immediately accessible to the broadest possible range of humanities researchers, it does allow for greater accessibility as the development process continues. In particular, this DIY approach has advantages over mass production when designing for accessibility. In its current 1.0 form, the survey device has the same accessibility limits as commercial versions: the prompt, the buttons, and the feedback light all privilege sighted users. However, creating a more tactile interface and integrating tactile or audio feedback would help improve the accessibility of the design. Adding either 3D printed or laser cut attachments in the shape of smiley/frowny faces to the buttons is a relatively quick improvement to make the interface more tactile. Similarly, given that the Arduino inside the device is modular, adding on a speaker or vibrating motor means only a little soldering and code tweaking. Adding either a vibrational or audio response to coincide with the "Thanks!" LED lighting would provide additional perceptive cues, making the feedback more accessible as well. That said, an audio cue may make more sense in some contexts than in others. For example, an audible beep or "Thanks!" at a library makerspace could be distracting to patrons, in which case a slight vibration might be more rhetorically fitting. A few usability evaluations would help address this concern across contexts of use, and would also help determine if vibrations would impact the electronics of the device over time, which is part of why I'm testing the device in different contexts.
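As a rough sketch of how little code that tweak involves, the following assumes a piezo buzzer added on pin 9 and a small vibration motor (driven through a transistor) on pin 7; neither component, nor this exact feedback routine, is part of the v1.0 build described in this webtext:

const int ledPin = 8;     // existing "Thanks!" backlight
const int buzzerPin = 9;  // assumed: added piezo buzzer
const int motorPin = 7;   // assumed: added vibration motor driver

void setup() {
  pinMode(ledPin, OUTPUT);
  pinMode(motorPin, OUTPUT);
}

// Called in place of the LED-only feedback after a button press
void thanks() {
  digitalWrite(ledPin, HIGH);
  tone(buzzerPin, 880, 200);     // 880 Hz beep for 200 ms; omit in quiet spaces
  digitalWrite(motorPin, HIGH);  // short vibration pulse instead of (or alongside) the beep
  delay(250);
  digitalWrite(motorPin, LOW);
  delay(1750);                   // remainder of the ~2-second lockout
  digitalWrite(ledPin, LOW);
}

void loop() {
  // thanks() would be called from the existing button-press handler
}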

Timekeeping

One of the most frustrating issues with this device is that the internal clock drifts approximately 40–60 seconds weekly. As Adafruit user jboyton (2015a) suggested, this inaccuracy is a limitation of the internal hardware (regardless of whether it's powered on). Based on customer responses online, such as jbenedetto84's (2015), a 1–7 second offset per day is common. As jboyton (2015b) further explained, this offset can be compounded by battery drain, as well as temperature fluctuations (approximately 1 second/day per 10 degrees Celsius of temperature change). While more precise real-time clocks exist (for example, the ChronoDot), using one would require building a data logger from scratch. With short sampling times (e.g., over 1–4 weeks) this drift may not warrant correction. However, if it's necessary to measure the precise time of responses over longer periods (e.g., 11:52:01 AM vs. 11:52:41 AM over weeks, or 11:30 AM vs. 11:45 AM over months), researchers should periodically re-sync the data logger clock or take calibration measurements. As an example, as part of a 16-week study, I simply calibrated for an offset every time I transferred data from the SD card to a computer. To do so, I pushed a button on the survey, logged the actual time of the button press, and compared the actual time with the logged timestamp. In this case, second-accurate results weren't critical. However, given the length of the study, I needed to account for approximately a 10-minute difference over 16 weeks. Because I calculated a weekly offset, I was able to adjust the data automatically in Excel.
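As a quick sanity check on that figure, taking the low end of the observed drift:

40 seconds/week × 16 weeks = 640 seconds ≈ 10.7 minutes

which is consistent with the roughly 10-minute correction this study required.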

Downtime Logging

To my surprise, in one of the spaces I studied, the survey box was unplugged for an unknown length of time. Unfortunately, this left a mystery gap in the response data. Given that people in this particular space only rated their experiences a few times a week, it was difficult to pinpoint when the box was unplugged, for how long, and when it was powered on again. Fortunately, this should be relatively easy to fix with a few additional lines of code (or a battery backup). Simply including the time and date with each new log file would help resolve this issue, as a new log file is generated every time the box is plugged in. Although this fix wouldn't specify when the box became unplugged, it would at least identify each time the box was powered on. Alternatively, writing an hourly or daily diagnostic time to the SD card would help prevent this issue, but at the expense of messier data files.
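A minimal sketch of that fix, extending the illustrative example from the How It Works section (again, not the deployed code): because setup() runs once per power-up, stamping the new log file there records every restart.

void setup() {
  // ... pinMode, rtc.begin(), and SD.begin() as in the earlier sketch ...
  DateTime boot = rtc.now();                        // time at power-up
  File logfile = SD.open("LOG00.TXT", FILE_WRITE);  // the real sketch would open the next numbered file
  if (logfile) {
    logfile.print("Powered on,");  // header row marking this restart, in the same CSV format
    logfile.print(boot.year());  logfile.print('/');
    logfile.print(boot.month()); logfile.print('/');
    logfile.print(boot.day());   logfile.print(' ');
    logfile.print(boot.hour());  logfile.print(':');
    logfile.println(boot.minute());
    logfile.close();
  }
}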

How to Build One

How to Build An Inexpensive User Satisfaction Survey

Step 1: Materials

Electronics
  • 4 arcade buttons with microswitches (2 green, 2 red; the 60mm buttons from Adafruit shown in Figure 2)
  • LED backlight
  • 150 Ohm 1/4W resistor
  • Arduino Uno R3 with Adafruit Data Logging Shield attached (assembly instructions) (requires CR1220 12mm 3V coin cell battery)
  • 9 VDC 1000mA regulated switching power adapter
  • SD card
  • Wire (female quick connectors, as shown, make wiring easier; 4 buttons require 8 connectors)
  • Heat shrink tubing
  • Headers (optional)

Case
  • A case (either a Harbor Freight 48-hook key box as shown below, or either of the 3D printed or laser cut box designs in the attachments file below)
  • Clear printable sticker paper for 'Thanks!' label and face stickers
  • Clear adhesive for stickers (the Avery paper adhesive wasn't sticky enough, so YMMV if using a different brand)
  • Packing tape

Tools
  • Pencil or marker
  • Ruler
  • Scissors
  • Soldering iron and solder
  • Hot glue gun
  • Eye protection
  • Computer and USB cable for programming the Arduino

If Not Using a Laser Cut or 3D Printed Box
  • Dremel tool and cutting blade
  • Metal file, sandpaper, or Dremel grinding bit
  • Piece of rubber or silicone for insulating the Arduino (hot glue or electrical tape should work fine too)
  • Grommet (not pictured)
Attachments
 
Materials Top-left: Arcade buttons, LED backlight, Arduino Uno, Data Logging Shield, SD card, power supply. Top-middle: Enclosure, glue, packing tape, ruler, permanent marker, scissors. Top-right: drill with step bit. Center: 150 Ohm 1/4W resistor. Bottom-left: Soldering iron, solder, safety glasses, hot glue gun, wire, heat shrink tube. Bottom-middle: Dremel 3000 with cutting wheel. Bottom-right: Sticker paper printed with smiley and frowning faces and 'Thanks!' labels.
Figure 2. Required materials and supplies

Step 2: Modify the Box (Skip If You're Not Using a Key Box)

Note: Wear appropriate eye, ear, and hand protection when using a rotary tool.

To fit everything inside the box, I had to remove the extra metal sheet inside that would normally hold keys. To do this, I used a Dremel tool to cut the hinge and remove it, as shown.

Left: Opened metal box with inner hinges highlighted for cutting. Right: Opened metal box with inner hinges cut and metal sheet removed.
Figure 3. Preparing the metal box

Step 3: Prepare the Labels

  1. Print or vinyl cut the happy/sad faces and the "Thanks!" labels from the "faces_color.pdf" file (EPS file available for modifying). The PDF has two sets in case of mistakes :)
  2. Seal the labels so that the ink doesn't smudge or fade. I used packing tape to do this.
  3. Cut the stickers to size. The red sad faces will go on the red buttons, and the green happy faces will go on the green buttons. The "Thanks!" sticker will go on the LED backlight, and should be cut to size accordingly.
  4. Note: Do not attach the stickers to the buttons yet, since the buttons still need to be test fit. Step 6 covers how to orient and attach the stickers.
Left: Sticker paper printed with smiley and frowning faces and 'Thanks!' labels. Right: Smiling and frowning face stickers sealed with packing tape and cut out from page, along with a sealed 'Thanks!' sticker.
Figure 4. Printed, sealed, and cut labels

Step 4: Measure

Decide where you want to position the Arduino and mark the location so that you can drill a hole for the power cord later. An Arduino Uno with a data logging shield is 70mm x 53mm x 17mm (2.7in x 2in x 0.65in). 

For the metal lock box I used: Print the "paper_template.pdf" from the attachments file in Step 1, align it with the box, and tape it in place.

For a custom 3D printed or laser cut enclosure: If you're designing a custom enclosure using the 60mm buttons from Adafruit, you'll need 1 inch (25.4mm) holes for the stems. The stabilizing pins are 3.32mm in diameter, and their centers are 37.7mm apart (shown in the specifications on the product page).

Step 5: Drill Holes (Skip If Using 3D Printed or Laser Cut Enclosure)

Note: Only mark the position of the "Thanks!" light. We'll create a small hole for the leads after the button holes are in place and we've checked their fit.

  1. Drill two additional pilot holes to the side of each circle as marked to align with the small guide pins on each button.
  2. Using a step bit or appropriately sized bit, drill holes for the buttons and guide pins.
  3. Check to make sure the buttons fit as you go.
  4. If you're using a metal box, make sure to deburr or file down the edges after drilling.
  5. Drill a small hole for the "Thanks!" light leads (The LED backlight will sit on top of the case. DO NOT cut out the area in the shape of the light).
  6. Drill a hole for the Arduino power cord in the side of the box.
  7. Do a final test after deburring to ensure the buttons still fit.
Left: Top-down view of box with four holes drilled for arcade buttons, a smaller hole for the 'Thanks!' light, and a small hole on the side for power. Center: Rotated view of the box, showing the location of the hole for the power cord on the side of the box. Right: View of holes from inside the box with arcade buttons test fitted into place.
Figure 5. Box with holes drilled

Step 6: Glue the Labels

Now that we've tested that each button fits and sealed the labels, it's time to apply the labels to the buttons and LED. Note: Since the buttons have two pegs to prevent them from spinning, make sure that you align the stickers parallel to the pegs as shown.

  1. Add glue evenly to the back of the face stickers, and then carefully center them on the arcade buttons and let dry for 30 minutes to an hour. Note: It is possible to set the glue faster by carefully using a heat gun on the lowest setting, but it's extremely easy to accidentally melt the sticker by applying heat for too long.
  2. Remove the clear protective film from the front of the LED backlight (NOT the white layers). The front side is the side facing toward you when the long lead is oriented as shown in this photo from Adafruit (2018). You can test which side is the front by plugging the backlight into pin 13 and ground on the Arduino (the default program should blink the LED). Apply glue to the LED sticker and adhere it. Let dry for 30 minutes to an hour.
Top: Hand applying glue to the back of stickers. Bottom: Two buttons and two stickers. The left button is paired with a green smiling face sticker aligned parallel to the guide pegs on the underside of the button. The right button has an 'X' through it, showing that the frowning sticker was not properly aligned with the guide pegs.
Figure 6. Label alignment and gluing. Note the position of the guide pegs on the left button

Step 7: Program

If you haven't used the data logging shield before, check out this guide from Bill Earl and lady ada of Adafruit (2018) to familiarize yourself with how it works, and to set the time on the data logging shield!
 
To program your Arduino data logger, upload the 4-button_data_logger sketch (from attachments file in Step 1) to your Arduino.

What the Code Does
This sketch logs the current date and time to an SD card whenever one of the four buttons is pressed, along with which button was pressed. After each button press, the "Thanks!" light stays lit for the duration you set (about 2 seconds by default), and no additional presses are logged during that delay. The data is stored in sequentially numbered text files (up to 100 total). A new file is created each time the Arduino is reset or unplugged. If two buttons are pressed at once, both will be logged. Holding a button will not create multiple entries. To change how long the "Thanks!" light stays lit, adjust the delay in milliseconds in line 146 of the code. For a shorter delay, decrease the number. Keep in mind that there will always be some delay while the Arduino writes to the SD card between button presses.

Settings

To change the number of buttons, change "nbts" (line 23) to the number you want, and rename "bts4" and "btgs4" to match the value you set for "nbts" (e.g., if nbts = 6, use bts6 and btgs6).

To offset the start pin, change the "startpin" variable (line 24). The pins need to be in sequence (e.g. setting "startpin = 4" and wiring to data pins 4,5,6,7 will work, but not 4,6,8,10). By default, the sketch is programmed to log 0 as very happy, 1 as happy, 2 as sad, 3 as very sad, based on the wiring in step 9: Solder. These numbers don't match the data pins in the video because I made a mistake during soldering :)
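For orientation, those settings sit near the top of the sketch and might look roughly like this (the variable names and line numbers come from the description above; the exact declarations are assumptions):

int nbts = 4;      // line 23: number of buttons (rename bts4/btgs4 to match)
int startpin = 0;  // line 24: first data pin; pins must be sequential (0,1,2,3)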
 
Top: Arduino with data logger shield attached and SD card inserted, connected to a computer via USB cable. Bottom: Screenshot of code with line 156 highlighted to show where to adjust the delay between logging button presses.
Figure 7. Programming the Arduino

Step 8: Dry Run

  1. Insert the button switches into the buttons.
  2. Bend the LED backlight leads and temporarily mount the LED to the outside of the case (use a piece of tape to hold it in place if needed).
  3. Position the Arduino where you marked in Step 4: Measure.
  4. Cut the following wires to length and strip 1/4" of insulation from each:
    • 4 positive wires (one for each button) from the positive terminal (shown below) to the corresponding data pin (0–3) on the Arduino. Note: These will likely be different lengths.
    • 4 ground wires (one for each button) from the bottom negative terminal of the switch (shown below) to the prototyping board of the shield. Note: These will likely be different lengths.
    • 1 positive wire from the positive lead of the LED backlight to pin 8 on the Arduino.
    • 1 ground wire from the negative lead of the LED backlight to one of the Arduino ground pins.
    • 1 jumper wire to connect the second Arduino ground pin to the prototyping board (about 1 inch long).
  5. Cut two pieces of heat shrink tubing for the LED leads.
  6. Label the wires to keep things organized, and then remove the switches from the buttons.
  7. Optional: If you're using a metal enclosure, remove the LED backlight and insert a grommet in the hole for the leads at this point.
Left: Metal box with four smiling and frowning buttons temporarily in place. 'Thanks!' light is held in place with blue painter's tape. Center: View of test fitted arduino and wires cut to length. Right: Five pairs of wire cut to length and attached to button leads.
Figure 8. Test fitting the assembly and cutting wires

Step 9: Solder

Note: If you plan to reuse the Arduino or data logging shield for another project, solder headers to the shield and then connect the wires rather than permanently soldering the wires. Make sure to measure whether there's clearance for the box to close with the headers attached.

  1. Remove the SD card from the data logging shield, and detach the data logging shield from the Arduino so that you can solder connections on the underside. I tried to skip this part and ended up redoing connections later because of it.
  2. Tin each of the wires you cut in the previous step. Additionally, tin the negative (cathode) lead of the LED backlight (the shorter of the two) and one lead of the 150R resistor.
  3. Tightly wrap the untinned lead of the 150R resistor around the positive (anode) lead of the LED backlight (the longer of the two leads), and then solder them together.
  4. Solder the 150R resistor lead to the wire you cut for the LED backlight positive connection in the previous step.
  5. Solder the negative lead of the LED backlight to the wire you cut for the negative connection in the previous step.
  6. Slip heat shrink tubing over the LED wires and shrink. I also added a second larger piece of heat shrink tubing that covered both leads in order to cover the gap left between the leads and the backlight panel.
  7. Mount the LED backlight to the case with tape.
  8. Solder the positive lead of the LED backlight to pin 8 (I used pin 13 initially, but the onboard LED later created issues with keeping the backlight lit, so I changed the pin).
  9. Solder the negative lead of the LED backlight to GND.
  10. Solder each positive wire from the four switches to data pins 0–3 as shown (the diagram below is correct, but I had to resolder two of the positive wires to different pins than in the photos because I accidentally overheated data pin 1...).
  11. Solder the switch ground wires to the prototyping board of the shield, placing them side by side.
  12. Solder a jumper wire from the second GND pin to a hole next to the switch ground wires on the prototyping board.
  13. Bridge the joints of the four switch ground wires and the jumper wire using solder.
Wiring diagram showing where to connect each component to the data logging shield
Figure 9. Wiring diagram showing where to connect each component to the data logging shield
 
Left: Underside of data logging shield circuit board, demonstrating solder bridging. Right: Resistor lead wound around positive lead of the LED backlight.
Figure 10. Solder bridging and lead winding

Step 10: Test and Assemble

  1. Reattach the data logging shield to the Arduino and reinsert the SD card.
  2. Being careful not to ground the Arduino to the case, test the circuit by connecting the Arduino to your computer, running the sketch, and testing the buttons. If you open the serial monitor in the Arduino app, you should see a message that the SD card is initialized (as shown below). After that, if you press a button, it should show which button was pressed, and the LED will light for about two seconds.
    • If you encounter any issues, double check the code first, and then check your soldering and other connections to make sure everything is connected properly. For example, I was getting garbled output in the serial monitor when logging certain button presses; I checked the code first, but because all the other buttons worked, I determined that I had accidentally overheated data pin 1. Similarly, I initially used pin 13 for the LED, so the LED backlight would only blink briefly no matter how long I set the delay. I fixed both of these issues by resoldering to new pins.
  3. Tighten the button mounts.
  4. Attach the Arduino to the case using hot glue or mounting screws. Warning: if using a metal case, add insulation as shown to prevent shorts (I used electrical tape, added a thick base of hot glue that I let cool completely, and then added a few dots of hot glue to the mounting holes on the board. Probably overkill, but better to be safe).
  5. Carefully close the case.
  6. Hot glue the LED backlight to the case.
  7. Connect the power cord to the Arduino (optional: add grommet and hot glue).
Arduino serial monitor displaying a real time log of button presses
Figure 11. Real-time log of data button presses
 
Top-left: Inside lid of box shown with electric tape applied to bottom-right corner for insulation. Top-middle: Hot glue applied to electrical tape for further insulation. Top-right: Applying hot glue to arduino base. Bottom-left: View of inside box lid with arduino glued in place. Bottom-middle: Hot gluing 'Thanks!' light onto the front of the box. Bottom-right: Hot gluing power cord into place for added insulation and durability.
Figure 12. Gluing the Arduino into place

Step 11: Use It!

Create a compelling sign that gives people a reason to express their happiness by pressing your big arcade buttons. Place it nearby, and plug in the power! 

You can use any spreadsheet software to track satisfaction ratings over time, in response to events, or in response to more specific questions (e.g., How was the food at the workshop?), and then visualize the data (Plot.ly makes quick plots of ratings over time, and RAWgraphs is good for creating quick visualizations).

The number of button presses you can log is practically endless. However, the Arduino sketch will create a new numbered logfile each time the Arduino is unplugged or reset, up to 100 files. So, make sure to periodically download the data from the SD card.

Left: Sign displaying text 'Please rate your experience today!' Right: 4-segment pie chart showing approximate percentages of positive and negative responses
Figure 13. Design and print a compelling prompt for the device

References

Adafruit Industries. (n.d.). 1622-02.jpg (1200×900) [Image]. Retrieved October 2, 2018, from https://cdn-shop.adafruit.com/1200x900/1622-02.jpg

Al-Azzawi, Ali. (2019). Dubai happiness agenda: Engineering the happiest city on earth. In W. A. Samad & E. Azar (Eds.), Smart Cities in the Gulf: Current State, Opportunities, and Challenges (pp. 195–221). Singapore: Springer Singapore. https://doi.org/10.1007/978-981-13-2011-8_11

Bierman, A. S., Bubolz, T. A., Fisher, E. S., & Wasson, J. H. (1999). How well does a single question about health predict the financial health of Medicare managed care plans? Effective Clinical Practice: ECP, 2(2), 56–62.

Earl, Bill, & ada, lady. (2018, August 22). Adafruit data logger shield. Adafruit Learning System. Retrieved October 2, 2018, from https://learn.adafruit.com/adafruit-data-logger-shield/overview

Buck, Amber, Condis, Megan, Prins, Kristin, Brooks-Gillies, Marilee, & Webber, Martha. (Eds.). (2015). Craft rhetorics [Special issue]. Harlot: A Revealing Look at the Arts of Persuasion, (14). Retrieved from http://harlotofthearts.org/index.php/harlot/issue/view/14

Brignull, Harry. (2010). Dark patterns. Retrieved September 28, 2018, from https://darkpatterns.org/

Csikszentmihalyi, Chris. (2012). Sixteen reflective bits. In G. Hertz (Ed.), Critical making. Hollywood, CA: Telharmonium Press. Retrieved March 13, 2019, from http://conceptlab.com/criticalmaking/

Fancher, Patricia. (2018, October 18). 2019 Feminisms and rhetorics conference: Redefining feminist activism – coalition of feminist scholars in the history of rhetoric and composition. Retrieved March 12, 2019, from http://cwshrc.org/blog/2018/10/18/2019-feminisms-and-rhetorics-conference-redefining-feminist-activism/

Faris, Michael J., Blick, Andrew M., Labriola, Jack T., Hankey, Lesley, May, Jamie, & Mangum, Richard T. (2018). Building rhetoric one bit at a time: A case of maker rhetoric with littleBits. Kairos: A Journal of Rhetoric, Technology, Pedagogy, 22(2). Retrieved March 13, 2019, from http://kairos.technorhetoric.net/22.2/praxis/faris-et-al/situating.html

Fox, Jennifer. (2016, April 18). Interactive survey game. Retrieved September 28, 2018, from https://www.instructables.com/id/Interactive-Survey-Game/

Gibbs, Fred, & Owens, Trevor. (2012). Building better digital humanities tools: Toward broader audiences and user-centered designs. Digital Humanities Quarterly, 6(2).

Graves, Colleen. (2018a, September 7). Makey your own exit ticket or data tracker. Retrieved September 28, 2018, from https://labz.makeymakey.com/cwists/preview/1391x

Graves, Colleen. (2018b, September 7). Makey your own interactive exit ticket with scratch and Makey Makey. Retrieved September 28, 2018, from https://makeymakey.com/blogs/blog/makey-your-own-interactive-exit-ticket-with-scratch-and-makey-makey

jbenedetto84. (2015, April 20). Re: RTC on data logger shield not very accurate [Forum]. Retrieved September 28, 2018, from https://forums.adafruit.com/viewtopic.php?f=25&t=72112

jboyton. (2015a, April 17). Re: RTC on data logger shield not very accurate [Forum]. Retrieved September 28, 2018, from https://forums.adafruit.com/viewtopic.php?f=25&t=72112

jboyton. (2015b, April 20). Re: RTC on data logger shield not very accurate [Forum]. Retrieved September 28, 2018, from https://forums.adafruit.com/viewtopic.php?f=25&t=72112

Khan, M. Sajid, Woo, Mina, Nam, Kichan, & Chathoth, Prakash K. (2017). Smart city and smart tourism: A case of Dubai. Sustainability, 9(12), 2279. https://doi.org/10.3390/su9122279

Nichols, Jennifer, Melo, Marijel, & Dewland, Jason. (2017). Unifying space and service for makers, entrepreneurs, and digital scholars. portal: Libraries and the Academy, 17(2), 363–374. https://doi.org/10.1353/pla.2017.0022

Sheridan, David M. (2010). Fabricating consent: Three-dimensional objects as rhetorical compositions. Computers and Composition, 27(4), 249–265. https://doi.org/10.1016/j.compcom.2010.09.005

Shipka, Jody. (2011). Toward a composition made whole (1st ed.). Pittsburgh, PA: University of Pittsburgh Press.

Shivers-McNair, Ann. (2017). 3D interviewing with researcher POV video: Bodies and knowledge in the making. Kairos: A Journal of Rhetoric, Technology, Pedagogy, 21(2). Retrieved March 13, 2019, from http://technorhetoric.net/praxis/tiki-index.php?page=PraxisWiki:_:3D%20Interviewing

Tham, Jason C. K. (2018). Learning from making: A design challenge in technical writing and communication. In Proceedings of the 36th ACM International Conference on the Design of Communication (pp. 25:1–25:2). New York, NY, USA: ACM. https://doi.org/10.1145/3233756.3233935

Trouteaud, Alex R. (2004). How you ask counts: A test of internet-related components of response rates to a web-based survey. Social Science Computer Review, 22(3), 385–392. https://doi.org/10.1177/0894439304265650

West-Puckett, Stephanie. (2013, September 13). ReMaking education: Designing classroom makerspaces for transformative learning. Retrieved March 12, 2019, from https://www.edutopia.org/blog/classroom-makerspaces-transformative-learning-stephanie-west-puckett

1 I follow Chris Csikszentmihalyi’s (2012) act of air quoting “maker,” from his Sixteen Reflective Bits, throughout this webtext, in part as a critique of the term itself, and in recognition of the gender, class, and racial biases the term generally reflects.

 

