Friday, August 4, 2017

The benefits of "crowdsourced" research

What is crowdsourced research?  
Briefly, “crowdsourced” research involves several individual researchers who coordinate their resources to accomplish goals that would be difficult to achieve individually. Although there are several different ways in which researchers can work collaboratively, this post focuses on projects where several different researchers each collect data that will be pooled together into a common analysis (e.g., the “Many Labs” projects, Ebersole et al., 2016; Klein et al., 2014; Registered Replication Reports [RRR], Cheung et al., 2016; Wagenmakers et al., 2016; “The Pipeline Project,” Schweinsberg et al., 2016).
Below I try to convince you that crowdsourcing is a useful methodological tool for psychological science and describe ways you can get involved. 
Eight benefits of crowdsourced research
First, crowdsourced research can help achieve greater statistical power. A major limiting factor for individual researchers is the available sample of participants for a particular study. Commonly, individual researchers do not have access to a large enough pool of participants, or enough resources (e.g., participant compensation) to gain access to such a pool, to complete a properly powered study. Or researchers must collect data for a long period of time to obtain their target sample size. Because crowdsourced research projects aggregate results from many labs, a major benefit is that such projects have yielded larger sample sizes and more precise effect size estimates than any of the individual contributing labs could have achieved alone. 
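To make the power benefit concrete, here is a minimal sketch in base R (the effect size, per-lab sample size, and number of labs are hypothetical illustrations, not figures from any of the projects cited above). Suppose the true effect is d = 0.30 and each lab can collect 25 participants per condition in a two-group design:

# Hypothetical numbers for illustration only
power.t.test(n = 25,  delta = 0.30, sd = 1, sig.level = .05)$power   # one lab: power is roughly .18
power.t.test(n = 250, delta = 0.30, sd = 1, sig.level = .05)$power   # ten labs pooled: power is roughly .92

A single lab of this size would detect the effect less than one time in five, whereas the pooled sample would detect it more than nine times in ten, and the pooled effect size estimate would be correspondingly more precise.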
Second, crowdsourced research provides information about the robustness of an effect to minor variations in context. Conclusions from any individual instantiation of an effect (e.g., an effect demonstrated in a single study within a single sample at a single point in time) are inevitably overgeneralized when summarized (e.g., Greenwald, Pratkanis, Leippe, & Baumgardner, 1986). That is, any individual study occurs within an idiosyncratic combination of an indefinite number of contextual variables, most of which are theoretically irrelevant to the effect (e.g., time of day, slight moment-to-moment variations in the temperature of the room, the color of socks the researcher is wearing, what the seventh participant ate for breakfast the Saturday prior to their study appointment, etc.). Thus, a summary of an effect “overgeneralizes” to contexts beyond what was actually present in the study being summarized. And it is only when an effect is tested across several levels and combinations of these myriad contextual variables that strong inferences can be made about the theoretically invariant characteristics of the effect (e.g., the effect is observed across a range of researcher sock colors; thus, the observation of the effect is unlikely to depend on any specific color of socks).  
A benefit of crowdsourced research is that the results inherently provide information about whether the effect is detectable across several slightly different permutations and combinations of contextual variables. Consequently, crowdsourced research allows for stronger inferences to be made about the effect across a range of contexts. Notably, even if a crowdsourced research project “merely” uses samples of undergraduate students in artificial laboratory settings, the overall results of the project would still test whether the effect can be obtained across contexts that vary slightly from sample to sample and laboratory to laboratory. Although this hypothetical project may not exhaustively test the effect across a wide range of samples and conditions, the results from the overall crowdsourced research project will test the robustness of the effect more than the results from any individual sample within the project.
Third, because the goal of most crowdsourced research is the aggregation or synthesis of results from several different labs that have agreed to combine their results a priori, another benefit is that inclusion bias (which would be comparable to publication bias among published studies) is unlikely within the studies that contribute to the project. Consequently, the overall results from crowdsourced research projects are less likely to suffer from inclusion bias than any comparable synthesis of already-completed research, such as a meta-analysis. Rather, crowdsourced research projects involve several studies that provide estimates that vary around a population effect and are unlikely to systematically include or exclude studies based on those studies’ results. The lack of inclusion bias arises because individual contributors to a crowdsourced research project do not need to achieve a particular type of result to be included in the overall analysis. Rather, because the overall project hinges on several contributors each successfully executing comparable methods, individual contributors have a motivation to adhere to the agreed-upon methods as closely as possible.
Fourth, and related to the points made in the previous paragraph, because a crowdsourced research project involves the coordination of several labs, it is unlikely there would be post-hoc concealment of methodological details, switching of the planned analyses, file-drawering of “failed” studies, optional stopping of data collection, etc., without several other contributors knowing about it. This distribution of knowledge likely makes the final project more transparent and better documented than a comparable set of non-crowdsourced studies. In other words, it would literally take a conspiracy to alter the methods or to systematically exclude results from contributing labs of a crowdsourced research project. Consequently, because crowdsourced research projects inherently involve the distribution of knowledge across several individuals, it is reasonable for readers to assume that such projects have strongly adhered to a priori methods. 
Fifth, comparisons of the results from the contributing labs can provide information (but not all information) about how consistently each lab executed the methods. Although cross-lab consistency of results is not inherently an indicator of methodological fidelity, any individual lab that found atypical results (e.g., surprisingly strong or surprisingly weak effects), for whatever reason, would be easily noticeable when compared to the other labs in the crowdsourced research project and should be examined more closely.
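As an aside, here is a minimal sketch of what such a cross-lab comparison might look like in R using the metafor package (the package choice, the simulated numbers, and the variable names are my own illustration, not the analysis plan of any of the projects discussed above):

# Simulated lab-level estimates; 'metafor' is assumed to be installed
library(metafor)
set.seed(1)
labs <- data.frame(
  lab = paste0("Lab ", 1:10),
  yi  = rnorm(10, mean = 0.25, sd = 0.10),   # each lab's effect size estimate
  vi  = runif(10, 0.005, 0.02)               # each lab's sampling variance
)
fit <- rma(yi, vi, data = labs, slab = lab)  # random-effects pooling of the lab estimates
summary(fit)                                 # pooled estimate plus Q and I^2 heterogeneity statistics
forest(fit)                                  # a forest plot makes atypical labs easy to spot

A surprisingly strong or weak lab estimate stands out in the forest plot and in the heterogeneity statistics, which is exactly the kind of signal that should prompt a closer look at how that lab executed the methods.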
Sixth, the nature of crowdsourced research projects means that the methods are already in a form that is transferable to other labs. For example, there would already be a survey that has been demonstrated to be understood by participants from several different labs, there would already be methods that are not designed to be dependent on an idiosyncratic physical feature of one lab, there would be an experiment that has been demonstrated to work on several different computers or is housed online where the study is accessible for anybody with internet access, etc. The transferability of methods does not inherently make methods more appropriate for testing a hypothesis, but it does make it easier for other researchers who were not contributors to the original crowdsourced study to replicate the methods in a future study.
Seventh, although there have been calls to minimize the barriers to publishing research (and thus to reduce the negative impact of file drawers; e.g., Nosek & Bar-Anan, 2012), some have opined that research psychologists should be leery of the resulting information overload and strive to publish fewer-but-better papers instead (e.g., Nelson, Simmons, & Simonsohn, 2012). Crowdsourced research seems to address both the file drawer problem and the concern about information overload. Take an RRR as an example. Imagine if each individual contributor in the RRR conducted a close replication of a previously-published effect and tried to publish their study independently of one another. Imagine further that another researcher gathered these studies and published a meta-analysis synthesizing each of them. I do not believe that each manuscript would necessarily be publishable on its own. And, even in the unlikely event that several manuscripts each describing one close replication were published, there would be a significant degree of overlap between those articles (e.g., the Introduction sections presumably would largely cover the same literature, which would be tiresome for readers). Thus, several publications each describing one close replication of an effect are inefficient for journals, which would not want to tax editors and reviewers with several largely overlapping articles; for the researchers, who do not need to write manuscripts that are largely redundant with one another (plus, each manuscript is less publishable as a stand-alone description of one replication attempt); and for readers, who should not have to slog through several redundant publications. Crowdsourced research projects provide one highly informative presentation of the results: readers only need to find and read one manuscript, and editors and reviewers only need to evaluate one manuscript. Also, because the one crowdsourced manuscript would include all of the authors, there is no loss in the number of authors who get a publication. The result is fewer, but better, publications.*  
Finally, researchers at institutions with modest resources can contribute their resources to high-quality research. Thus, crowdsourced research can be more democratic than traditional research. There are hundreds of researchers who have access to resources (e.g., time, participants, etc.) that may be insufficient individually but could be incredibly powerful collectively. There also may be researchers who mentor students who must complete a project within a fixed period of time (e.g., a semester or an academic year) and therefore need projects where the hypotheses and materials are “ready to go.” Crowdsourced research projects ensure that scientific contributions do not only come from researchers who have enough resources to be self-sufficient. 
Three ways to get involved
First, stay up-to-date on upcoming opportunities. Check out StudySwap (https://osf.io/view/studyswap/), an online platform to facilitate crowdsourced research. Follow StudySwap on Twitter (@Study_Swap) and like StudySwap on Facebook (https://www.facebook.com/StudySwapResearchExchange/). Also follow the RRR group (https://www.psychologicalscience.org/publications/replication) and Psi Chi's NICE project (https://osf.io/juupx/) to hear about upcoming projects for you and your students. Crowdsourced research projects only work when there are lots of potential contributors who are aware of opportunities. 
Second, Chris Chartier and I are excited to announce an upcoming Nexus (i.e., special issue) in Collabra:Psychology on crowdsourced research. Although the official announcement will be coming in the near future, we are starting to identify individuals who may be interested in leading a project. This Nexus will involve a Registered Reports format of crowdsourced research projects we colloquially call Collections^2 (pronounced merely as “collections,” but visually denoted as a type of crowdsourced research by the capital C and the exponent). Collections^2 are projects that involve collections, or groups, of researchers who each collect data that will be pooled together into common analyses (get it? data collection done by a collection of researchers = a Collection^2) and are the same type of crowdsourced projects discussed above.**
Collections^2 that would qualify for inclusion in the Nexus can be used to answer all sorts of research questions. Here is a non-exhaustive list of the types of Collections^2 that are possible:
  1. Concurrent operational replication Collections^2: Several researchers simultaneously conduct operational replications of a previously-published effect or of a novel (i.e., not previously-published) effect. These projects can test one effect (such as some of the previous RRRs) or can test several effects within the data collection process (such as the ManyLabs projects). 
  2. Concurrent conceptual replication Collections^2: Projects where there is a common hypothesis that will be simultaneously tested at several different sites, but there are several different operationalizations of how the effect will be tested. The to-be-tested effect can either be previously-published or not. These projects would test the conceptual replicability of an effect and whether the effect generalizes across different operationalizations of the key variables. 
  3. Construct-themed Collections^2: Projects where researchers are interested in a common construct (e.g., trait aggression) and several researchers collect data on several outcomes associated with the target construct. This option is ideal for collections of researchers with a loosely common interest (e.g., several researchers who each have an interest in trait aggression, but who each have hypotheses that are specific to their individual research).
  4. Population-themed Collections^2: Projects where contributing researchers have a common interest in the population from which participants will be sampled (e.g., vegans, atheists, left-handers, etc.). This sort of collaboration would be ideal for researchers who study hard-to-recruit populations and want to maximize participants’ time. 
  5. And several other projects that broadly fall under the umbrella of crowdsourced research (there are lots of smart people out there; we are excited to see what people come up with).

This Nexus will use a Registered Reports format. If you are interested in leading a Collection^2 or just want to bounce an idea off of somebody, then feel free to contact Chris or me to discuss the project. At some point in the near future, there will be an official call to submit Collections^2 proposals, and lead authors can submit their ideas (they do not need to have all of the contributing labs identified at the point of the proposal). We believe the Registered Reports format is especially well-suited for these Collections^2 proposals. Collections^2 require a lot of resources, so we want to avoid any foreseeable mistakes prior to the investment of those resources. And we believe that having an In-Principle Acceptance is critical for the proposing authors to effectively recruit contributing labs to join a Collection^2. 
If you are interested in being the lead author on a Collection^2 for the Collabra:Psychology Nexus you can contact Chris or me. Or keep an eye out for the official call for proposals coming soon. 
Third, if you do not want to lead a project, consider being a contributing lab to a Collection^2 for the Collabra:Psychology Nexus on crowdsourced research. Remember, these Collections^2 will have an In-Principle Acceptance, so studies that are successfully executed will be published. Being a contributor would be ideal for projects that are on a strict timeline (e.g., an honors thesis, first-year graduate student projects, etc.). Keep an eye out for announcements and help pass the word along. 

*There is the issue of author order where fewer authors get to be first authors. However, when there are several authors on a manuscript, the emphasis is rightly placed on the effect rather than the individual(s) who produced the effect.

**The general idea of Collections^2 has been referred to as “crowdsourced research projects,” as we did above, or elsewhere as “concurrent replications” (https://rolfzwaan.blogspot.com/2017/05/concurrent-replication.html). We like the term Collections^2 because “crowdsourced research projects” are a more general class of research that does not necessarily require multi-site data collection efforts. We also believe the name “concurrent replications” may imply that this is a method only used in replication attempts of previously-published effects. Also, the name “concurrent replication” may imply that all researchers use the same variable operationalizations across sites. Although concurrent replications can be several operational replications of a previously-published effect, they are not inherently operational replications of previously-published effects. Thus, we believe that Collections^2 are more specific than “crowdsourced research projects” and more flexible than what may be implied by the name “concurrent replication.”  



Friday, May 19, 2017

3 useful habits for your research workflow

I chronically tinker with my research workflow. I try to find better ways to brainstorm, organize my schedule, manage my time, manage my files (e.g., datafiles, R code, manuscripts, etc.), read and synthesize research articles, etc. In some ways, I am always in a state of self-experimentation: I find an idea, make a change, and then reflect on whether that change was helpful. Some of these changes have "stuck" and become part of my research workflow. 

Recently I have been reflecting on which of my research workflow habits have proven useful and stuck with me over the (relatively) long haul. Here are my current top 3.

Habit #1: Making 1 substantive change per day on an active writing project

Researchers are writers and writing takes time. However, academic writing is a marathon, not a sprint, so academic writing takes a lot of time. It is not uncommon for some of my writing projects to be stretched out over the course of months and sometimes years. I don't know if this makes me a slow writer, but this is the pace at which I can write good academic prose. If I were less diligent, this timeline could be stretched out even further.

One habit that keeps me on track is to have an active writing project like a manuscript or a grant and commit to making one substantive change each day until the project is completed. Just one change. Even if you only have 5 minutes on a given day, that is sufficient time to open up your writing project, start reading, and make one substantive change. This could be making a sentence more concise, finding ways to smooth a transition between two related ideas, or replacing an imprecise adjective with a more appropriate one. Typically, when I make my one change for the day I end up writing for a longer period of time. The whole point of this habit is that "one change is more than none change."

Committing to one change per day is helpful because it keeps the project moving forward. It is a horrible feeling when you want to get a manuscript out the door and it has sat idle for 2 months. Where did the time go? Then you think about how much collective time you spent on Twitter and you wish you could have all of that time back in one big chunk. Sigh!

Habit #2: Learn to juggle

There is a saying that goes, "to be a good juggler is to be a good thrower." As a researcher, I am always handling several projects that are happening in parallel. Each of these projects requires a sequence of actions. Every now and then (like once a week), you need to assess your active projects and think about the current status and trajectory of each of them. Which balls are suspended in the air? Which balls are falling and require your immediate attention? Which balls can be thrown back up into the air? Are there any balls you can get rid of?

For example, preparing an IRB application requires you to accomplish a few activities (e.g., write the application, gather the stimuli, etc.), but once the IRB application is submitted you are merely waiting for approval; there is nothing that you can actively do with the application after it is submitted. Suppose you are at the beginning stages of a project and you need to do two activities: (a) write an IRB application and (b) program a study. It may make more sense to write the IRB application first and then, while the IRB application is being reviewed, take the time to program the study rather than vice versa. While you are programming the study, the IRB application review is happening in parallel. This is an example of "throwing" the IRB application ball so you can focus on the study programming ball.

This example seems obvious, but the juggling gets more complex as you get more balls in the air. Regularly assess all of your active projects and identify your next throw. Over time you begin to identify which throws are good throws. For me, good throws are either submissions (IRB applications, manuscript submissions, grant submissions, etc.) or getting feedback to co-authors because those projects can move forward at the same time I am focusing on other activities. For example, if there is a manuscript that is 95% complete, I focus my energies on the last 5%. Once the manuscript is submitted I can turn my attention to other things while that ball is suspended in air (i.e., the manuscript is being peer-reviewed). The habit that I have developed is to treat nearly-completed manuscripts and feedback to co-authors as priorities.

The key to making this habit work is to take the time and strategically choose your next throw. There is a big difference between the rhythm, cadence, and zen of a juggler and the chaos, stress, and frustration of whack-a-mole.

Habit #3: Clear the clutter

At the beginning of this year I wanted to make a small change to reduce the number of emails I receive. I used to get a lot of mass emails from places such as Twitter notifications, TurboTax, the American Legion, Honeywell thermostats (seriously!), etc. I never read these emails. Never! Now, whenever I get an automated email that I know I will never read, I go to the bottom of the email and find the "unsubscribe" link in the fine print. I take the 5 seconds to unsubscribe because I know that the 5 seconds I spend now will be repaid with minutes of my future time. I probably get 50% fewer emails now. Merely unsubscribing from mass emails has given me enough free time to make my one substantive change per day (Habit #1 above).

Here's how you can immediately incorporate these habits into your research workflow. First, assess your current projects and identify if there are any "good throws" you can make. Is there a manuscript that if you really, really focus on, you could get submitted in the next week? Is there a draft of a manuscript you could get returned to a co-author if you spent the afternoon in focused writing? Commit to executing one good throw. Second, identify a writing project that you will commit to writing on every single day. This can be your "good throw" project from the first step or something else altogether. Try to write on this project every day for a week. What do you have to lose? My prediction is that you will notice the progress and you won't want to stop making your daily substantive change. Finally, commit to unsubscribing from mass/junk emails as they come into your inbox. Just do it. You will notice a steady decrease in the amount of clutter in your inbox (and fewer distractions) as time goes on.

Good luck and have a productive day.




Friday, May 12, 2017

Academic Craftsmanship

Let me share three short stories.

Story 1: Steve Jobs was obsessed with the design of his products. When designing the first Macintosh, Jobs was adamant about the circuit boards being neat and orderly. The circuit boards! The innards of the computer! My guess is that 99% of users never looked inside the computer, and surely most of the 1% who did look inside never noticed the care and skill that went into making the circuit board look nice. Sure, it was just an orderly circuit board, and it may seem like a waste of resources because making the circuit board orderly does not inherently improve the performance of the computer. But it is this concern for excellence and quality carried throughout the entire product, inside and out, not just the parts most users see, that was essential to making the Mac the Mac.

Story 2: My nephew loves Legos. At a recent family function, I vividly remember him sitting on the floor methodically assembling his Lego model. His focus was intense. He was in a state of flow. He couldn’t care less about whether anybody was watching him work; he was on a mission to create something awesome. He’d look at the schematic, find the next piece, and put the piece in the right spot. Snap! Repeat! After the last step, looking at what he had assembled with his own two hands, he felt like Michelangelo unveiling the David. He loves building his Legos because the more he does it, the better he gets. 
  
Story 3: Some graduate student is in a lab somewhere right now tinkering with ggplots on her laptop. She tries out different shapes in her scatter plot. Now different colors. Is the font too big? Too small? Should I use theme_minimal() or theme_bw()? What location of the legend makes it easiest for a reader to intuit the essential information from the figure? After hours of tinkering, honing, polishing, she creates a figure that is just right. When she presents that figure, she glances at the audience’s reaction to her masterpiece.  

What do these three stories have in common? Craftsmanship.

Today I want to give a nod to the often overlooked academic craftsmanship that I see in my colleagues’ work. You know, the little things that researchers do in the process of creating their research products that give them pride. The little things that turn a merely publishable manuscript into scientific poetry, an adequate figure into a piece of art, and an ordinary lecture into the academic version of the Beatles' Sgt. Pepper's Lonely Hearts Club Band.

Let me first stake a flag in the ground before the rabble gets aroused. When I say academic craftsmanship, I do not mean “flair.” Even the craftiest craftsman who ever crafted a craft is incapable of consistently producing significant results with N = 20. Also, when I say academic craftsmanship, I do not mean having a knack for being able to “tell a good story” to an editor and three anonymous reviewers (although that does seem to be a skill that some people have developed). Craftsmanship cannot compensate for vague hypotheses or poor inferences. When I say academic craftsmanship, I simply mean the details that take care, patience, and skill and that evoke a sense of pride and satisfaction.

Here is one of my favorite examples of academic craftsmanship.

Check out the correlation graph between the original effect size and the replication effect size for the Reproducibility Project: Psychology (http://shinyapps.org/apps/RGraphCompendium/index.php#reproducibility-project-the-correlation-graph ). First off, the overall figure is packed with information—there is the scatterplot, a reference line for a replication effect size of zero and a reference line for a slope of 1 (i.e., original effect size = replication effect size), the density plots on the upper and right borders of the scatterplot, rug marks for individual points, point sizes that correspond to replication power, point colors that correspond to the p-values, etc.—but overall the figure amazingly does not seem cluttered. The essential information is intuitive and easily consumable. There are details such as the color of the points matching the color of the density plots matching the color of the rug ticks. Matching colors seems like the obvious choice, yet somebody had to intentionally make these decisions. You can breathe in the overall pattern of results without much effort. Informative, clean-looking, intuitive. This is a hard combination to execute successfully.

After seeing this figure, most people probably think “big deal, how else would you make this figure?” Believe me, I once spent 90 minutes at an SPSP poster session shaking my head at a horrible figure! It was ugly. It was not intuitive. It was my poster.

Now let’s look under the hood. Open up the R-code that accompanies this figure. Notice how there is annotation throughout the code; not too much, but just enough. Notice the subtleties in the code such as the use of white space between lines to avoid looking cluttered. Notice how major sections of the code are marked like this:

########################
# FIGURE 3
# EFFECT SIZE DENSITY PLOTS -------------------------------------------------------------
########################
The series of hashes and the use of CAPS are effective in visually marking this major section. Does this level of care make the R-code run better? Not one bit. However, it is extremely helpful to the reader. This clean R-code is akin to the orderly circuit board in the Mac.
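To make this concrete, here is a short sketch of what sectioned, annotated ggplot2 code in that spirit might look like. The data are simulated and the code is my own illustration, not the code behind the actual figure; the marginal density plots of the original would require an additional package (e.g., ggExtra), so I have left them out.

########################
# ILLUSTRATIVE FIGURE
# ORIGINAL VS. REPLICATION EFFECT SIZES ---------------------------------------
########################

library(ggplot2)

# Simulated effect sizes standing in for the real data
set.seed(42)
dat <- data.frame(original = runif(100, 0, 0.8),
                  power    = runif(100, 0.5, 1))
dat$replication  <- 0.5 * dat$original + rnorm(100, 0, 0.15)
dat$significance <- factor(dat$replication > 0.2, levels = c(FALSE, TRUE),
                           labels = c("n.s.", "p < .05"))

# Scatterplot with reference lines, rug marks, and point sizes scaled by power
ggplot(dat, aes(x = original, y = replication)) +
  geom_hline(yintercept = 0, linetype = "dashed") +            # replication effect of zero
  geom_abline(slope = 1, intercept = 0, colour = "grey60") +   # original = replication
  geom_point(aes(size = power, colour = significance), alpha = 0.7) +
  geom_rug(aes(colour = significance), alpha = 0.5) +          # rug marks for individual points
  theme_minimal() +
  labs(x = "Original effect size", y = "Replication effect size")

Even in this toy example, the little decisions pile up: the white space, the aligned comments, the matched colors between the points and the rug marks. None of it changes what the figure says; all of it changes how pleasant it is to read and to revisit.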

This is just one example. But I see craftsmanship all over the place. A clever metaphor, a nicely worded results section, the satisfaction of listening to the cadence of a well-rehearsed lecture, etc. Perhaps I will share more of these examples in the future. For now I only have one request. If this post is discussed on social media, I would like people to share their favorite examples of academic craftsmanship. 

Monday, May 8, 2017

All aggression is instrumental


Aggression is commonly defined as a behavior done with the intent to harm another individual who is motivated to avoid receiving the behavior. Some researchers go further and try to classify aggression as being either "reactive aggression" or "instrumental aggression." I do not believe this distinction is useful.

Briefly, reactive aggression is supposedly an impulsive aggressive behavior in response to a provocation or instigation and is typically accompanied by feelings of anger or hostility. The supposed goal of reactive aggression is merely to "cause harm" to the recipient of the behavior. Think of snapping at another person in the heat-of-the-moment. Instrumental aggression is supposedly an aggressive behavior that is enacted to achieve a particular goal.  Think of a bank robber who shoots the guard while trying to make a getaway. 

Several researchers have pointed out that this distinction is difficult, if not impossible, to make (e.g., Bushman & Anderson, 2002; Tedeschi & Quigley, 1999). I agree. With a little thought, one can see that "snapping" at another person can be used to achieve several goals, such as restoring one's reputation after a perceived slight or exerting social control. Thus, the above example of reactive aggression also can be construed as instrumental. Similarly, one also can see that the bank robber shooting the guard probably was in response to some feature of the situation, such as the perception that the guard was impeding the goal of successfully executing the robbery. Thus, the above example of instrumental aggression can be construed as being in response to something and, thus, reactive.

Wait! Am I saying that snapping at another person is the same as a bank robber shooting the guard? No. These are very different behaviors, but the distinction is not that one is "reactive" and one is "instrumental."

The argument that the reactive-instrumental distinction is a false distinction is fairly simple. Aggression is, by definition, a behavior that was done intentionally (i.e., non-accidentally). Intentional behaviors are used to achieve social motives. Thus, aggression is one specific type of intentional behavior that is used to achieve social motives. What are some examples of social motives that can be achieved with aggressive behaviors? Protecting oneself, acquiring resources, restoring one's reputation, enforcing a violated social norm, etc.

Further, the belief that aggression can be done simply "to cause harm" is logically incorrect. Because the definition of aggression requires the aggressive behavior to have been done with intent and with the belief that the recipient wants to avoid the behavior, some believe this definition implies that “causing harm” can be the end goal of the behavior rather than merely a means to achieving some other end. Therefore, "causing harm" can seemingly be the goal behind reactive aggression. Although this is a common belief, it conflates the definitional criteria of aggression with the motive for why an individual would use an aggressive behavior. This is an easy conflation to make because “to cause harm” seems like a reasonable and satisfactory answer to the question “why did this person behave aggressively?” However, it only seems like a satisfactory answer; it is not. One cannot explain the causes of a phenomenon (aggression) merely by referring to a necessary component of the phenomenon (an intentionally-caused harmful behavior): a person who behaves aggressively did so with the intent to harm the recipient by definition.

I sincerely hope that we can move beyond the reactive-instrumental definition because I do not believe it is a scientifically useful distinction. Aggression is one behavior in our repertoire of behaviors we use to navigate our complex social environments. All aggression is instrumental. 

Tuesday, February 21, 2017

"Lab-based measure of aggression" are to "real aggression" what college students are to all humans


Aggression is a common feature of social interactions. Therefore, it is important for social scientists to develop a well-rounded understanding of this phenomenon. One valuable approach to understanding aggression is laboratory-based research, which requires researchers to have usable and valid methods for measuring aggression in laboratory settings. However, behaviors that are clearly aggressive, such as one person forcefully striking another person with a weapon, are fraught with ethical and safety considerations for both participants and researchers. Such behaviors are, therefore, not a viable option for eliciting aggression within lab-based research. For these reasons, aggression researchers have developed a repertoire of tasks that purportedly measure aggression, are believed to be safe for participants and researchers, and are ethically palatable. I collectively refer to these tasks as “lab-based aggression paradigms.” The major concern herein is whether the behaviors exhibited within lab-based measures of aggression are representative of "real" aggression. 

A common definition of aggression is “a behavior done with the intent to harm an individual who is motivated to avoid receiving that behavior” (Baron & Richardson, 1994, p. 7). If one adheres to this definition, a behavior is considered aggressive when both (a) a harmful behavior has occurred and (b) the behavior was done (i) with the intent to harm the target and (ii) with the belief that the target wanted to avoid receiving the behavior. A strength of this definition is the clear demarcation between harmful behaviors that are not aggressive (e.g., a dentist who causes pain in the process of pulling a patient's tooth; inflicting consensual pain for sexual pleasure, etc.) and harmful behaviors that are aggressive (e.g., punching another person out of anger; yelling at another person and causing a fear response, etc.). 

As hinted at above, the degree of "harm" that is permissible within lab-based settings is very mild. In fact, the lower bound of harmfulness at which behaviors become unambiguously aggressive is likely the upper bound of harmfulness that is permissible within laboratory settings.  

Extending Baron and Richardson’s (1994) definition, Parrott and Giancola (2007) proposed a taxonomy of how such aggressive behaviors may manifest. Within their taxonomy, aggressive behaviors vary along the orthogonal dimensions of direct versus indirect expressions and active versus passive expressions. For example, a physical fight would be considered a direct and active form of physical aggression, whereas not correcting knowingly-false gossip would be considered an indirect and passive form of verbal aggression (to the extent the individual believes their inaction will indirectly harm a target individual). Because Parrott and Giancola strongly adhere to the definition of aggression proposed by Baron and Richardson, each of these forms of aggression is still required to meet the criteria described in the previous paragraph. The purported usefulness of this taxonomy is that factors that incite one form of aggression may not incite other forms of aggression. Thus, Parrott and Giancola assert that using their taxonomy to classify the different behavioral manifestations of aggression, and which antecedents cause those different manifestations, will lead to a nuanced understanding of the causes and forms of aggression.

The first dimension of Parrott and Giancola’s (2007) taxonomy is the direct versus indirect nature of the aggressive behavior. In describing the distinction between direct and indirect aggression, Parrott and Giancola state that direct aggression involves “face-to-face interactions in which the perpetrator is easily identifiable by the victim. In contrast, indirect aggression is delivered more circuitously, and the perpetrator is able to remain unidentified and thereby avoid accusation, direct confrontation, and/or counterattack from the target” (p. 287). However, several lab-based aggression paradigms seemingly have features of both direct and indirect forms of aggression. Many of these paradigms involve contrived interactions where participants communicate with a generic “other participant,” for example, via computer or by evaluating one another’s essays. These contrived interactions are not really face-to-face and they are not really anonymous. So the behaviors within lab-based aggression paradigms are not cleanly classified as either direct or indirect within Parrott and Giancola’s taxonomy. 

Similarly, participants’ behaviors exhibited within lab-based aggression paradigms are often not “directly” transmitted to the recipient of those behaviors. For example, participants do not make physical contact with their interaction partner at any point within these paradigms. The consequences of participants’ behaviors are often transmitted to the recipient via the ostensible features of the study in which they are participating. For example, in one lab-based aggression paradigm, participants’ harmful behavior is selecting how long another participant must submerge their hand in ice water (Pedersen, Vasquez, Bartholow, Grosvenor, & Truong, 2014). Therefore, participants must believe (a) they can harm the recipient by varying how long they tell the experimenter to have the recipient hold their hand in ice water, (b) that a longer period of time causes more harm, and (c) that the experimenter will successfully execute the harmful behavior at a later point in time.

Collectively then, it is questionable whether behaviors within lab-based aggression paradigms can be considered “direct.” Nevertheless, it is clear that these behaviors do not include face-to-face aggression or physical aggression involving direct physical contact. And the timing of many of the behaviors within lab-based aggression paradigms is asynchronous with the (ostensible) delivery of harm to the recipient.

The second dimension of Parrott and Giancola’s (2007) taxonomy is the active versus passive nature of the behavior. Active aggression involves an individual actively engaging in a behavior that harms the recipient. In contrast, passive aggression is characterized by a lack of action that is believed to cause harm to the recipient. All of the major lab-based aggression paradigms involve behaviors that are considered active.

In summary, within lab-based aggression paradigms, the harmfulness of the behaviors is on the extreme low end of the range of possible harmfulness, participants may believe their behaviors will only cause mild amounts of harm, participants may believe the recipient is only mildly motivated to avoid the behaviors, and the form of participants’ behaviors may only cover a limited portion of the conceptual space of possible forms of aggression. Collectively, the behaviors exhibited in lab-based aggression paradigms seem to be limited and unrepresentative of the multi-faceted nature of aggression.

Is this potential unrepresentativeness a problem? On the one hand, the relationship between the behaviors within lab-based measures of aggression and "real" aggressive behaviors is like the relationship between a convenience sample of college students and "all humans." The former is not a representative sample of the latter; therefore, generalization from the former to the latter is potentially biased. On the other hand, to the extent that the behaviors exhibited within lab-based aggression paradigms are valid instances of very mild and specific forms of aggression, lab-based research has a valuable place within a robust science of aggressive behaviors. 

Wednesday, January 11, 2017

A glimpse into my academic writing habits

The other day I was talking to a student who was interested in my approach to academic writing. Where do I write? When do I write? How often do I write? Etc. Later, this student expressed that our conversation was helpful. Here is the gist of my response. I hope you find at least one thing helpful.

I am not a naturally gifted writer, so producing writing that is considered a "scientific contribution" requires my sustained and focused mental effort. So the first thing I do is ensure there is time in my schedule to write. The second thing I do is ensure that I fill that time with cognitively-demanding and mentally-focused writing. Spend enough time doing mentally-focused writing: It's really that easy. 

Perhaps it is because I come from a family of dairy farmers, or perhaps it is from my time in the military, but I am an early riser. I typically wake around 5 AM (except for holidays, vacations, etc.). From 5'ish until 6'ish I engage in what I call "deep writing" (inspired by the concept of "deep work": http://calnewport.com/books/deep-work/). My morning writing time takes the same amount of time as drinking one cup of coffee.

Deep writing is not superficial writing. During this time I don't just make bullet points or do mundane tasks like checking references or formatting a table. I focus intensely on the content of what I am writing. Is my writing clear? Is my writing accurate? Is my writing precise? During this time I am not checking my email or thinking of what is on my schedule for the day. It sometimes feels like a mental fight. The second I sharpen my focus, my mind seems to want me to check my email. Sometimes I am literally staring at the screen, but not doing any deep thinking. If I catch myself being unfocused, I refocus on the task of writing. There is a level of focus that my mind seems to be comfortable at. I try to push myself just past this point of comfort so that I am effortfully immersed in my writing. This is both hard work and extremely satisfying. I imagine this is the same satisfaction that artists get out of engaging in their work.

Given that I only do this for about an hour, and given the level of focus that I try to invest, there are some mornings where I only work on a single paragraph. That's OK, my only goal is that I make at least one substantive change to what I am working on every morning. This goal of one substantive change is sustainable and attainable. It is a small goal, yet it ensures that I am making steady progress on whatever I am writing. No matter what else happens the rest of the day, I know that my current writing project is moving forward. I also find that mornings work best for me to engage in deep writing. My mind tires during the day and I find it harder to really intensify my writing focus as the day drags on.

There are mornings where I wander onto Twitter or I check the news and I don't make my daily substantive change. These days bother me.

After my deep writing time, I get ready for the rest of my day.

When I get to my office I usually check my email right away. This is probably not an ideal habit, but I am working on improving it. I respond to quick emails and then check my calendar. I have time blocked off for my meetings, classes, conference calls, etc. I also have time blocked off to engage in more deep writing. Some semesters I can only block off a one-hour writing chunk here and a three-hour writing chunk there, but I always put writing time onto my schedule. Always! It is a habit I developed in graduate school and it has served me well ever since. I like different environments for my writing. I typically write in my office. I close out of my email. I don't play any music. I only use the internet for looking up word definitions, finding articles, etc. Sometimes I write at the library or at Starbucks to minimize unscheduled interruptions. No matter where I am, I try to push myself to focus hard and produce the best writing that I am capable of during this time.

I also block off two hours each week for "professional development" where I read a chapter out of a stats textbook or try to learn a new skill. Currently I am using my professional development time to learn RMarkdown/knitr. During this time I don't do anything else but focus on the new skill I am trying to develop. I also block off an hour each day for "busy work" where I do things like scan documents, fill out travel vouchers, clean off my desk, etc. During my "busy work" time I reward myself by playing music (I am currently listening to Bob Dylan).

Blocking off time for these non-writing activities helps me protect my writing time and allows me to mentally focus during my writing time. However, there are days when unexpected tasks arise, when I need more time than I allotted to complete a task, etc. Some days I miss my writing time. These days bother me.

I try to end my work day in the late afternoon. Most days I have invested enough focus on different activities that I am mentally spent. Some evenings I write. Writing in the evenings typically consists of superficial tasks like light editing because I usually don't have enough mental energy to engage in deep writing. Sometimes in the evenings I am thinking about the overall structure of a manuscript or if there is an apt metaphor that I could incorporate into my writing. But I try to protect my evenings for non-work to the same extent I protect my writing time during the days. I also try to do non-academic reading in the evenings. I find the different style and pace of non-academic writing to be helpful in training my ear to hear well-written prose.

When I describe my writing schedule, people often think that I have the luxury of being able to block off regular writing time as if I am not otherwise busy. However, I feel like it requires a lot of effort to set aside writing time and then selfishly protect it. Without this effort of imposing control over my time, my schedule would be overrun and hectic.

That's it. No secrets. No magic solution. No neat productivity tricks. Just planning and focus. In the end, am I the best writer? No. Am I the best writer that I can be? I am trying. 

Tuesday, October 25, 2016

(Fast) Food for Thought

Often I find myself walking to a local fast food establishment for lunch. The staff there is excellent: They keep the place clean, they always greet me with a smile, and they make delicious food. A few years ago, this particular fast food chain had a string of bad press where it was discovered that a very small number of employees were doing some unsavory things to customers’ food.

I felt bad for the staff at my local restaurant. They had no association with these trouble-makers other than they happened to work for the same restaurant chain, just like thousands of other individual employees. After the news broke, some customers were worried about what was happening behind closed doors in their local restaurant (e.g., Is some bad employee doing something unsanitary to my lunch?). And the staff was probably concerned about being perceived as being one of the trouble-makers (e.g., Do my customers think that I am doing something unsanitary to their lunch?). A few bad news stories ruined the whole employee-customer relationship.
 
The response by my local franchisee was simple and effective: They modified the store to have an open-kitchen design (http://business.time.com/2012/08/20/nothing-to-hide-why-restaurants-embrace-the-open-kitchen/). Now, I can order my lunch and watch the employees prepare my food. I can see into the kitchen and see exactly who is handling my food and how they are handling it. It is transparent. I suspect the staff likes the open-kitchen concept too. They know that if they follow the proper procedures, customers will not erroneously suspect them of doing something unsavory to their food. By opening up the food preparation process, the whole employee-customer relationship was improved. Now, customers can receive their lunch with the confidence that it was made properly, and the staff can provide customers their lunch with the confidence that customers are not suspicious.

I also suspect the open-kitchen concept had several secondary benefits too. For example, the staff probably keeps the kitchen cleaner and avoids cutting obvious corners when they know they are in plain sight of customers. When I go to a different restaurant that still has a “closed-kitchen” design, I wonder what I would see if I could peer into their kitchen. Consequently, all else being equal, I choose open-kitchen establishments over closed-kitchen establishments. Open-kitchen designs are good for the bottom line.

The parallels between the open-kitchen design and "open science" are obvious. As researchers, we produce information that other people consume and we consume information that other people produce.


Here is some (fast) food for thought. As a producer of research, would you feel comfortable allowing your consumers to transparently see your research workflow? As a consumer of research, if you were given the choice between consuming research from an open-science establishment or a closed-science establishment, which would you choose?