Abstract

This research examines how cognitive bias manifests in the design activities of graduate student design teams, with a particular focus on how to uncover evidence of these biases through survey-based data collection. After identifying bias in design teams, this work discusses those biases with consideration for the intent of error management, through the lens of adaptive rationality. Data were collected from nine design teams in one graduate-level design course during a semester-long project. Results are shown for five different types of bias: bandwagon, availability, status quo, ownership, and hindsight biases. The conclusions drawn are based on trends and statistical correlations from survey data, as well as course deliverables. This work serves as a starting point for highlighting the most common forms of bias in design teams, with the goal of developing ways to mitigate those biases in future work.

1 Introduction and Background

Adaptive rationality states that cognitive biases stem from evolutionary survival strategies, which can be broken down into three (not mutually exclusive) categories: heuristics, error management, or experimental artifacts [1,2]. Heuristics are rules of thumb used by designers to reach satisfactory solutions. Identification of heuristics in design has been studied in detail by the authors [3,4]. This work focuses on identifying error management bias effects in design, defined by Haselton as found when one plans toward the least costly error [1]. For example, this may occur when one prefers to encounter a false positive over a false negative (or more simply stated as a false alarm over a miss). Examples of these biases and their hypothetical influence in design are shown in Table 1.
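To make the error-management logic concrete, the short Python sketch below shows how planning toward the least costly error can make a false alarm the rational choice. The cost ratio and threat probability are purely hypothetical values chosen for illustration, not quantities from this study:

    # Illustrative error-management sketch: a decision rule is "biased"
    # toward false alarms whenever a miss is the costlier error.
    # All numbers are hypothetical and chosen only for illustration.

    def expected_costs(p_threat: float, cost_fp: float, cost_fn: float) -> dict:
        """Expected cost of acting (risking a false positive) vs. ignoring
        (risking a false negative), given the probability the threat is real."""
        return {
            "act": (1 - p_threat) * cost_fp,   # false alarm if there is no threat
            "ignore": p_threat * cost_fn,      # miss if the threat is real
        }

    # Suppose a miss costs 10x a false alarm (e.g., a failed design vs. wasted caution).
    costs = expected_costs(p_threat=0.2, cost_fp=1.0, cost_fn=10.0)
    print(costs, "->", min(costs, key=costs.get))
    # {'act': 0.8, 'ignore': 2.0} -> act

Even though the threat is unlikely (20%), acting is the less costly error on average, which is the sense in which such a bias can be adaptive.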

Table 1

Examples of hypothetical cognitive biases in design

Ownership bias [5–7]: Tendency to attribute increased value to an owned entity
  • Bias toward concepts developed by oneself, when compared against those developed by others, leading to pursuit of concepts that may not ultimately be the best or may not surpass those already available to end users

Status quo bias [8–10]: Tendency to select a default option when one is present
  • Bias to maintain the status quo solution or similar, influencing benchmarking and ideation
  • Bias toward initially generated concepts, and against iteration or further development of the initial concept

Availability bias [11,12]: Making judgments based on the most available information in memory
  • Misinformed perceptions of a market or problem space based on immediate association with that market
  • Misinformed design decisions based on the most available user testing population

Bandwagon effect [13–16]: Tendency to support a decision without proof of its value
  • Bias toward falling in line with the most popular design decisions rather than vocalizing opposition in a team setting

Hindsight bias [11,12,17]: A belief that the outcome of an event was predictable or more likely, only after having knowledge of the outcome
  • Bias toward a belief that unforeseen circumstances, such as negative user feedback or a missed latent customer need, should have been easily avoided


When considering error management biases, be they optimistic or paranoid depending on the context, it is important to consider the evolutionary context to which they relate in order to hypothesize about their function. In design practice, error management may relate to things like self-preservation of one's livelihood or conservation of resources. It is also key to eliminate or minimize experimental artifacts, which are biases that result from an experimental design that compares human behavior to a "rational" or "optimal" choice that is not appropriate for the context, or that uses a problem that is inappropriate within an evolutionary context [1,2]. The key to reducing experimental artifacts is not necessarily changing the problem, but rather considering human decision-making in the context of evolutionary survival and relating the problem or context being examined to a more evolutionarily relevant situation. In many cases, apparently biased decisions become logical choices with this new lens. Experimental artifacts are considered in this work as a caution in experimental design and results interpretation, rather than as a crucial component of the taxonomy of biases.

When approaching the study of cognitive biases in design, it is important not only to recognize when, where, and how these biases manifest, but also to recognize the potential impact of the biases on design outcomes, which may not necessarily be negative. While cognitive bias has been studied and documented thoroughly in the fields of psychology and cognitive science, there is a benefit to extending that work to engineering designers. First, engineering designers are a specific population that has been shown to differ from the general population, which may cause cognitive biases to manifest differently. For example, Williams et al. show that cognitive changes occur in students after design education experiences, including a shift in the focus of their design processes toward functionality [18]. In their training, engineering designers develop significantly improved spatial and visualization skills compared to the general public [19]. In addition, many cognitive differences between expert and novice designers have been found, which merit studies comparing the role of cognitive biases in these distinct populations [20]. Stanovich and West show that cognitive ability correlates with a tendency to avoid some cognitive biases [21], which may have implications for engineering designers, a more highly trained subset of the general population. Thus, this work is not a replication of prior work, but an expansion of the existing literature. Lastly, as discussed above with respect to experimental artifacts, many studies have been performed in unnatural environments or using inappropriate norms of comparison. By studying engineering designers during the design process in situ, we can observe subjects in their natural working environment and consider their cognition and decision-making in the context of adaptive rationality [1]. By studying cognitive bias specifically within engineering design, the findings will be directly relevant and meaningful to design theory and practice, a valuable contribution to the state of knowledge in the field of design.

Cognitive bias in design has been examined by a few researchers, but there is still plenty of unexplored territory. Some researchers have already investigated the evidence for confirmation bias in designers and approaches for debiasing [12,22–24]. Viswanathan and Linsey [25] studied the link between fixation and the sunk cost effect, finding that sunk cost bias could be a major driver of fixation during early-stage design. They showed that physical prototyping requiring more time and energy investment led to lower novelty and variety of ideas, and to more fixation [25]. Zheng et al. found that concept selection was significantly impacted by the expectations that design students had for their concepts, indicating evidence of cognitive bias in early-stage design decision-making [26]. Hindsight bias has been studied in relation to trust in automation, as well as foresight in complex systems and organizations [27,28].

In engineering design decision-making, Vermillion et al. documented framing bias, showing that subjects were more likely to select the less risky option in positively framed scenarios but were risk neutral in negatively framed scenarios [29]. Toh et al. examined ownership bias and its relationship to gender, finding that male designers tended to exhibit ownership bias in concept selection, while female designers tended to exhibit the opposite bias (the halo effect) by selecting more ideas that were not their own [30]. Ownership bias has also been studied from the perspective of design professionals, as well as crowd-based decision-making [31,32]. Austin-Breneman et al. studied biased information passing among designers during negotiation in aerospace complex systems design processes. They found that subsystem designers would report conservative parameters or estimates with built-in margins to give themselves room for design freedom when negotiating design specifications with other subsystems [33]. Several biases have been shown in software engineering and design, including applications to CAD systems and the internet of things (IoT) [16,34–37].

The prior work presented above confirms that biases do exist in design. The work presented in this paper aims to bring a better understanding of why these biases are so prevalent. If biases are viewed from an adaptive rationality perspective, a new understanding of the value of a bias to the designers themselves can emerge. This perspective can better highlight whether a bias is beneficial or detrimental to their process, and ultimately how best to address it. Therefore, the hope of this study is to provide a more thorough context for the decision-making biases that are studied in design team settings.

2 Materials and Methods

The 36 participants of this study were all students in a graduate-level engineering design course. The class consisted of 40 students broken into ten design teams, four students per team. Only one team in the course did not fully consent to participate in this study, and they were excluded from all results and analyses. Participants consented to give the researchers access to all individual and team course deliverables.

This course met twice a week, 75 min per meeting, in the Spring 2020 semester. The team project counted for 50% of each participant's course grade, and extra credit was provided to participants who allowed their project work to be included in this research study. This was a 16-week course; the first project assignment was due in week six, and the project continued for the remaining ten weeks. During week ten of the semester, it was announced that classes would move online for the remainder of the semester due to COVID-19. This announcement came after customer needs had been obtained by the students, but before concept ideation and selection were submitted. This means that the selection, feedback, and iteration processes were all submitted after courses were made fully remote. The syllabus for the course had five main learning objectives, summarized below:

  • Describe design methods, tools, and terminology.

  • Analyze and evaluate when and why particular design methods are or are not appropriate.

  • Apply multiple design methods and tools in individual and team settings.

  • Participate in team design activities.

  • Communicate design process choices and outcomes in written and oral formats.

A demographic survey showed that the participants consisted of 23 men and 13 women. The majority classified themselves as White (20) or Asian (13). Thirty-five of the 36 participants were between the ages of 21 and 26, with one participant older than 27. Eight participants were in a Ph.D. program, 23 were in a Master's program, and five were senior-level undergraduate students. Thirty participants were in mechanical engineering, with eight of them adding additional disciplines, such as robotics, computer science, aerospace, biology, and chemistry-related fields. The remaining six participants were from aerospace, chemical, or electrical engineering programs and were not enrolled as mechanical engineering students.

The approach to the methodology for this study was to gather as much data as possible within the designers' natural working environment. This was done by combining data from real project deliverables with reflection surveys that were integrated into the course. There were five main surveys given to students, distributed after teams completed five critical outcomes across the semester: project selection, customer needs and target specifications, concept selection, design refinement, and the final design report. The course deliverables accessed by the researchers for each critical outcome can be found in Table 2, and a timeline of the deliverables by course week is shown in Fig. 1. In Fig. 1, a dotted line separates the weeks when courses were taught in person from the weeks when courses were moved to remote instruction due to the pandemic. The final design report contained an economic analysis, which was not a standalone course deliverable before the final report.

Fig. 1
Timeline of course deliverables and surveys relative to course week
Table 2

Course deliverables relative to the surveys distributed to students

Survey | Corresponding course deliverables accessed
Survey 1 | Individual project topic ideation; team project proposal
Survey 2 | Customer needs identified with ranking; target specifications
Survey 3 | Individual concept ideation; group concept ideation; concept selection and selection process
Survey 4 | Design concept feedback; design iteration
Survey 5 | Final team project report

The surveys were a mix of Likert scale, multiple choice, and open-ended text-entry questions. Several questions recurred across all five surveys, briefly listed below. The purpose of these questions was to provide consistency across the project for comparison across phases.

Role in decision-making: Participants provided Likert scale responses to how they felt about the decision-making process during the design phase/tasks. This includes whether they advocated for their ideas/beliefs, felt satisfied with the decisions made, felt heard when voicing opinions, and felt invested in the decision-making process.

Effort: Participants documented the amount of effort (in hours) for the most recent design task. They selected from a list the number of hours of individual work, as well as hours spent working as a team. Lastly, they stated how heavily invested they felt in the specific design processes that occurred throughout the project.

Perception of the market: Participants provided Likert scale responses to how they felt about the market after major design tasks were completed. This includes the market size as well as how likely they are to view themselves as part of the market.

Design methods and personal duties: Participants chose from a list or wrote in the methods used to achieve their design tasks, as well as any duties they were assigned individually during that time.

Aside from common survey questions, there were other questions tailored specifically to the most recent course deliverables for the project. These unique survey questions are described below, and their contributions toward identifying biases will be discussed in the results section for each corresponding bias.

Survey 1: In addition to the recurring survey questions listed above, participants were asked to list what they believed to be their top three ideas from the individual project topic ideation assignment, in ranked order from best idea to third best idea. For each idea, participants provided information such as the amount of research performed and how they developed the idea (personal experience, identifying current solutions, etc.). Survey 1 also asked participants to predict which five customer needs would be relevant, before the customer needs assessment was performed. Similar text-entry questions asked for a description of their first idea of what the solution would look like, as well as whether they were aware of any current solutions on the market.

Survey 2: Survey 2 included unique questions concerning how many stakeholders participants interacted with, individually and as a team. Likert scale statements asked how students felt about the stakeholders and how the customer needs assessment impacted their view of the project. Similar to survey 1, participants ranked the three customer needs they believed to be most important to the project. For each of the top three needs, Likert scale statements asked participants whether they were the person who identified the need, whether they felt the need would be easy to ideate for, and whether they had the time and resources to meet it. A similar ranking and description process was performed for their personally ranked top three target specifications.

Survey 3: For survey 3, participants were asked how they felt about the individual and team ideas generated. This included statements about how involved the team was in the process, the quality of the ideas, and the difficulty they encountered in generating ideas. Like survey 1, students were asked to write out what they believed to be the best three ideas of all the ideas generated, in ranked order from best idea to third best idea. They were asked about various aspects of each idea, such as whether they contributed to it, whether it matched their vision of the solution from the beginning of the semester, and how other team members felt about it. They were asked to choose their preferred idea based on the same factors presented to them in survey 1. They were also asked for their opinion on the final concept with which the team chose to move forward. Survey 3 ended with Likert scale statements asking participants how they felt about moving into the user feedback phase, followed by an open-ended description of any assumptions or shortcuts students believed their teams would need to take to receive virtual feedback. These questions were implemented specifically because the semester was moved to fully online courses before the user feedback process due to the pandemic.

Survey 4: Survey 4 asked participants for their thoughts about the end users chosen for gathering user feedback, including whether they were the most available people or the best depiction of the market. Participants also provided opinions on the process used to gather feedback, such as the method for communicating the design and the severity of refinements needed based on feedback. Lastly, participants listed what they believed to be the three most important design decisions made, ranked from most important to third most important. These were to be specific decisions concerning whether to modify or not modify aspects of the design, and how the design was modified, based on the user feedback received. For each decision, participants provided Likert scale agreement with statements such as whether they agreed with the decision, whether it included design refinements, whether they recommended it, and whether it was justifiable.

Survey 5: Survey 5 asked participants to describe how they felt about the economic analysis performed by the team, as well as the final design. They were also presented with a series of Likert scale statements regarding what they would have done differently, such as being more vocal or spending more time on ideation. Then, participants were asked to describe one major decision about the design or design process that they would change if they could do the project over again, and whether they were influential in making this decision. Similar prompts asked participants to describe one major decision that they believed was critical to the design's success, as well as any issues they encountered and whether they should have seen those issues coming beforehand. Survey 5 ended with demographic questions. In addition to basic demographic questions, participants were asked to respond to the set of statements listed below as honestly as possible.

  • I am comfortable sketching my ideas.

  • I am a creative person.

  • If I have spent more time on an idea or project, I am more reluctant to abandon it.

  • If I am the owner of an idea, I am more inclined to want to pursue that idea on a design team.

  • If I have a hypothesis, I hope it will be confirmed by the data I collect.

  • Usually, the solution that exists to a design problem (the status quo) is a good one.

  • When I write interview or survey questions for user feedback, I am careful to consider positive or negative wording.

  • When I'm tired or stressed, I think I make different design decisions than I would make otherwise.

These questions supplement additional CATME demographic questions that were asked by the instructor at the beginning of the semester. Besides basic information such as age, sex, race, discipline, and year in their degree program, the following additional information was collected at the beginning of the semester:

  • “Big Picture”—Participants labeled themselves as a visionary, preferring ideas, preferring detail, or a more balanced approach to seeing the big picture.

  • “Leadership Role”—Participants labeled themselves as being a follower, preferring to follow, preferring to lead, or a more balanced approach.

  • “Leadership Preferences”—Participants labeled themselves as preferring a single leader, shared leadership, or one leader with input when defining the leadership in a team setting.

  • “Experience”—Participants labeled their level of comfort with being hands-on, ranging from no experience to expert level.

3 Results

3.1 Bandwagon Effect.

The bandwagon effect occurs when people support decisions without proof of their value, such as agreeing with popular opinions without voicing opposition or dissatisfaction. To identify the possibility of the bandwagon effect within design teams, we identified whether participants "advocated" for their ideas and beliefs and paired this with their satisfaction with group decision-making and the design problem moving forward. The results focus on participants who did not advocate for their beliefs but continued with the project as the team saw fit, giving the appearance that they were simply following along with the popular opinion in team decisions. Questions regarding advocating and satisfaction were included in all five surveys across the semester. Responses to the Likert scale survey questions are shown in Figs. 2 and 3.

Fig. 2
Participant agreement toward advocating for their own beliefs across design phases
Fig. 3
Student responses, across all teams, to being satisfied with decision-making across design phases

From Figs. 2 and 3, it is clear that the majority of students stated that they advocated for their beliefs and were satisfied with the team's results. We looked more closely at those who did not agree that they advocated during decision-making with their team, but agreed that they were satisfied with the outcomes. For this analysis, we counted participants who answered "disagree" or "neutral" as those who did not agree. For example, 11 participants (31%) listed "disagree" or "neutral" for whether they advocated for their own project topic ideas, yet all 11 stated they were satisfied with the final topic chosen. This is shown for additional project deliverables in Fig. 4.
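A minimal sketch of this flagging logic is shown below, assuming hypothetical column names and Likert codings (1 = strongly disagree through 5 = strongly agree); the values are illustrative stand-ins, not the actual survey exports:

    import pandas as pd

    # Hypothetical responses; column names are illustrative stand-ins
    # for the advocacy and satisfaction survey items.
    responses = pd.DataFrame({
        "participant": ["P1", "P2", "P3", "P4"],
        "advocated":   [2, 3, 5, 4],   # "I advocated for my own ideas/beliefs"
        "satisfied":   [4, 5, 5, 2],   # "I was satisfied with the decisions made"
    })

    # Flag respondents who did not agree they advocated (neutral or below)
    # yet still agreed they were satisfied with the team's decisions.
    flagged = responses[(responses["advocated"] <= 3) & (responses["satisfied"] >= 4)]
    print(flagged["participant"].tolist())  # ['P1', 'P2']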

Fig. 4
Participants that felt satisfied at each stage without advocating for their own beliefs

The stages in Fig. 4 are in chronological order of class deliverables, from project selection to the economic analysis. The number of participants who fall under suspicion of the bandwagon effect decreases as the semester progresses toward concept ideation and selection, then increases again as the semester ends. This could be due to project or semester fatigue, or it could indicate how much students value each portion of the process. For example, participants may value the ideation and selection process most highly and be willing, or susceptible, to fall into the bandwagon effect at other times. Across the entire semester, there were 48 total survey responses listed as "neutral" or "disagree" for advocating for one's ideas or beliefs. Only seven of these (15%) did not report being satisfied with decision-making. This means that the majority of team members who were not speaking up were also not reporting any evidence of displeasure.

Demographic data from the beginning of the semester were used to look for other explanations of the results. One significant finding was produced: participants' leadership role preference was significantly correlated with their average satisfaction. For this analysis, an independent-samples Kruskal–Wallis test was performed to compare nonparametric data across more than two groups, followed by a pairwise comparison to account for comparisons across each group. There were three groups for this analysis based on participant responses to the CATME survey: those who prefer following, those who prefer a balanced approach, and those who prefer leading in teams. There was a fourth category (follower), but it had only one participant and therefore could not be used for comparisons. For the satisfaction survey questions shown in Fig. 3, an average satisfaction score was produced for each participant. The results showed that participants who prefer leading were significantly less satisfied on average than participants who prefer a balanced approach between leading and following (H(2) = 7.057, p = 0.029). This may suggest that students who take a more balanced approach are more likely to fall in line with the majority opinion and to lean on their leadership skills only when there is no consensus. However, this speculation would require a more in-depth assessment than the data can provide and is left to future work.
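The following sketch reproduces the shape of this analysis with invented satisfaction scores (the group sizes and values are assumptions for illustration). It uses scipy's Kruskal–Wallis test followed by Bonferroni-corrected pairwise Mann–Whitney comparisons, one reasonable post hoc choice among several:

    from scipy import stats

    # Hypothetical average-satisfaction scores grouped by leadership preference.
    groups = {
        "prefers following": [3.8, 4.0, 4.2, 4.4],
        "balanced":          [4.4, 4.6, 4.8, 5.0, 4.2, 4.8],
        "prefers leading":   [3.0, 3.4, 3.8, 3.2, 3.6],
    }

    h, p = stats.kruskal(*groups.values())
    print(f"H(2) = {h:.3f}, p = {p:.3f}")

    # Pairwise follow-up with a Bonferroni correction across the three comparisons.
    names = list(groups)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    for a, b in pairs:
        _, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        print(f"{a} vs {b}: adjusted p = {min(1.0, p_pair * len(pairs)):.3f}")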

One additional finding from the demographic CATME data may show that other biases intersect with one's ability to avoid the bandwagon effect. Participants were asked to rate the statement: "If I am the owner of an idea, I am more inclined to want to pursue that idea on a design team." The more participants felt they were inclined to pursue their own ideas on a design team, the more likely they were to advocate for their own beliefs during the concept selection process, a statistically significant relationship (Spearman's ρ = 0.368, p = 0.029, n = 35). This suggests that those who have some awareness that they exhibit ownership bias may be less susceptible to the bandwagon effect.
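A sketch of this correlation test, with hypothetical paired Likert responses standing in for the study data:

    from scipy import stats

    # Hypothetical paired Likert responses (1-5): self-reported inclination to
    # pursue one's own ideas vs. agreement that one advocated during selection.
    ownership_inclination = [5, 4, 4, 3, 2, 5, 3, 4, 2, 1]
    advocated_selection   = [5, 5, 4, 3, 3, 4, 2, 4, 3, 2]

    rho, p = stats.spearmanr(ownership_inclination, advocated_selection)
    print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}, n = {len(ownership_inclination)}")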

Additionally, the data from Fig. 4 are broken down by team in Table 3. The results show that team 7 was consistently above the team average in the number of participants who may have shown symptoms of a bandwagon effect. This type of data can allow one to catch signs of bandwagon effects early in the semester and the design process. Reflections in the final survey may indicate this type of bandwagon behavior as well. When asked what decisions they would redo if they could do the project over again, one teammate specifically mentioned advocating as the main thing they would have changed. For example:

Table 3

Number of participants by team who did not advocate for beliefs, but were satisfied with decision-making

Team | Problem selection | Customer needs | Target specs | Concept selected | Concept refinements | Econ analysis
Team 1 | 0 | 0 | 0 | 1 | 0 | 1
Team 2 | 0 | 0 | 2 | 0 | 1 | 0
Team 3 | 0 | 0 | 0 | 0 | 2 | 1
Team 5 | 2 | 0 | 0 | 0 | 0 | 0
Team 6 | 1 | 1 | 1 | 0 | 0 | 3
Team 7 | 3 | 3 | 1 | 1 | 1 | 4
Team 8 | 1 | 1 | 0 | 0 | 0 | 0
Team 9 | 2 | 1 | 0 | 0 | 0 | 2
Team 10 | 2 | 0 | 0 | 0 | 0 | 2
Totals | 11 | 6 | 4 | 2 | 4 | 13
Team average | 1.22 | 0.67 | 0.44 | 0.22 | 0.44 | 1.44

Participant 7.1: "Our team made the decision to simply have the seat fold against a wall for simplicity. I did not try to impact the decision made for simplicity's sake, but I believe my team would have listened if I had tried to impact the decision. In hindsight, I might have advocated more for an adjustable seat angle."

Future work may seek to eliminate the good subject effect, where participants may not want to express a lack of satisfaction to the professor/researchers and appear to be a student causing dysfunction within the team [38]. In reference to previous bandwagon literature, some future adjustments may be needed to solidify evidence of bandwagon effects. For example, Barnfield suggests that the strongest bandwagon effects involve a change in an individual's opinion based on the popular opinion [14]. Additionally, bandwagon effects often require strong, vocal voices so that the popular opinion is clearly heard and seen as the majority opinion [15]. As shown in the reflection from Participant 7.1, students may have intuition about whether their voice will influence decisions or not. Future surveys may need to direct attention not just to whether someone advocated, but also ask individuals which opinions were most vocalized, whether their own view of those opinions was modified, and whether those opinions became part of the final decision.

3.2 Availability Bias.

Availability bias occurs when judgments are made based on the most available information in one's memory. For availability bias, we searched for instances in which decisions or opinions were developed based on the ease of obtaining or recalling information. The first search concerned how participants viewed the market for their chosen problem. There were two reasons for this approach. First, it is possible that those who believed they were a part of the market would have a more positive perception of it than those who were not. Second, it would be interesting to see how opinions of the market changed over time, as more information became available and was incorporated into the project.

The initial survey of market perception showed a significant positive correlation between “There is a large market for this product” and “I am a part of the market for this product” (Spearman’s ρ = 0.411, p = 0.013, n = 36). This remained a significant correlation for all remaining surveys [survey 2: ρ = 0.349, p = 0.037, n = 36] [survey 3: ρ = 0.397, p = 0.016, n = 36] [survey 4: ρ = 0.394, p = 0.019, n = 35] [survey 5: ρ = 0.477, p = 0.003, n = 36]. This means that for the entire semester, participants who believed they were in the market had a higher impression of the market for their design idea than those who did not. The case for availability bias is that participants more strongly believed in the market being large if they were a part of it.
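Tracking one correlation across survey waves, as done here, can be sketched as a simple loop. The data frames and column names below are hypothetical stand-ins for two of the five survey exports:

    import pandas as pd
    from scipy import stats

    # Hypothetical per-survey responses to the two market-perception items.
    surveys = {
        1: pd.DataFrame({"large_market": [5, 4, 2, 4, 5], "in_market": [5, 4, 1, 3, 4]}),
        2: pd.DataFrame({"large_market": [4, 4, 2, 5, 4], "in_market": [4, 5, 2, 4, 3]}),
    }

    for wave, df in surveys.items():
        rho, p = stats.spearmanr(df["large_market"], df["in_market"])
        print(f"survey {wave}: rho = {rho:.3f}, p = {p:.3f}, n = {len(df)}")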

An additional strong, negative correlation was found between being in the market and believing the market was niche [survey 1: ρ = −0.542, p = 0.001, n = 36] [survey 2: ρ = −0.347, p = 0.038, n = 36] [survey 3: ρ = −0.468, p = 0.004, n = 36]. This means that participants who felt they were part of the market were less likely to believe they were inside a niche market. This statement is less true for the later part of the semester, as the correlation did not hold for the final two surveys. It is possible that with more information, the participants were willing to admit that the market was different from their prior beliefs.

Lastly, for market perception, it was investigated whether participants who felt they were in the market contributed toward gathering more information during the process. One significant, negative correlation was found between being heavily invested in the customer needs process and believing that they had purchased products in this market before (ρ = −0.343, p = 0.041, n = 36). This means that participants who believed they had purchased products in the market before were less likely to be invested in the needs-gathering process. This implies that those who felt more familiar with the market and problem space were progressing under their own idea of the needs (readily available information in memory), rather than seeking out information that would deepen their understanding.

The results above show a consistent correlation between being a part of the market and believing the market to be large; however, the negative correlation between being a part of the market and feeling the market to be niche lost significance in the final two surveys. It is possible that the availability bias diminished across the semester as new information was received. However, future work should consider the impact of framing bias: a "large market" can be viewed as a positive framing of the market, and a "niche market" can be viewed as a negative framing. Participants seemed reluctant to let go of the positive frame of their market, although they were willing to give some agreement to the negative framing. This shows the importance of framing survey questions in multiple ways during data collection.

3.3 Status Quo Bias.

Status quo bias reflects a tendency toward a default option if one is present. For this bias, the approach was to understand if participants believed they were moving past the status quo of the current market, and if this belief has any relationship with project satisfaction.

To look for status quo bias in the student design teams, participants responded with their level of agreement with the following statement: "This product would be a disruptive innovation if introduced to the market." As shown in Fig. 5, belief in the product being innovative never surpassed the percent agreement received after the initial project selection (survey 1). It is possible that participants settled into a more status-quo mindset as the semester progressed. This could be due to several factors, such as the difficulty of the semester, fatigue with the team and project, or the realization that initial goals would be much harder to meet within the time and resource constraints of a course. There is a steep drop from survey 3 to surveys 4 and 5, which may imply that the circumstances of the pandemic forced teams to reevaluate their expectations for the project.

Fig. 5
Likert responses for how students perceived the innovation of their product

Across the five surveys, there were 62 total responses that disagreed that the product was innovative. These data were cross-referenced with the satisfaction survey question responses found in Fig. 3. Only five times did a participant disagree both with the product being innovative and with being satisfied with the corresponding decision-making at that point in the project. In other words, when participants did not believe their product to be innovative, 92% of the time they were still satisfied with the process or decisions made by the team. This could be a case for status quo bias, as those who did not believe their product would be innovative were still satisfied with their results. While the bandwagon effect considers whether one advocated for their beliefs, the status quo bias analysis considers only satisfaction despite a lack of innovation. Three of the five unsatisfied responses were from Participant 2.3, and all three participants (five responses total) used their survey 5 reflections to express displeasure with the project selected, stating this as the one thing they would have changed about their project. An example reflection statement from each of these three participants is shown below.

Participant 2.3: “I believe this product is niche at best and has potentially no market. The price point is insane. I would've changed the focus of the project entirely.”

Participant 8.3: “I probably would have chosen a design problem that I was more passionate about … However, my team chose our design problem because it was one that the majority agreed upon.”

Participant 10.1: “I frankly did not love our design problem… there were too many competing products already on the market… I would've spent more time to identify a more innovative and interesting problem to tackle. However, in these kinds of group settings, I've learned it's often not worth that level of time investment.”

Table 4 breaks down the data from Fig. 5 at the team level. This table shows the number of team members (out of four total) per team who agreed their product was innovative at the time each survey was completed. From these data, we can see that team 1 had the lowest number of team members who bought into the innovation of their product, averaging less than one person per survey. However, team 1 also did not produce a single "disagree" response when it came to satisfaction at each project deliverable, and only three responses were rated "neutral" satisfaction. Multiple team 1 members expressed in the survey 5 reflection data, shown after Table 4, that their product was not the most innovative. It should be noted that Participant 1.4 did not complete survey 4. One participant from team 5, a team whose members largely agreed that their product would be a disruptive innovation, indicated that their team pushed the boundaries of what they could achieve within the course.

Table 4

Team members who agreed that the product would be a disruptive innovation

Team | Survey 1 | Survey 2 | Survey 3 | Survey 4 | Survey 5 | Average
Team 1 | 1 | 2 | 0 | 0 | 0 | 0.6
Team 2 | 3 | 2 | 3 | 3 | 2 | 2.6
Team 3 | 1 | 1 | 1 | 1 | 1 | 1
Team 5 | 4 | 3 | 4 | 1 | 1 | 2.6
Team 6 | 0 | 2 | 2 | 1 | 1 | 1.2
Team 7 | 2 | 1 | 1 | 1 | 1 | 1.2
Team 8 | 2 | 1 | 1 | 0 | 1 | 1
Team 9 | 3 | 1 | 1 | 2 | 1 | 1.6
Team 10 | 1 | 1 | 2 | 1 | 1 | 1.2
Average | 1.89 | 1.56 | 1.67 | 1.11 | 1.00 |

Note: Teams 2 and 5 had the highest average agreement (2.6), and team 1 had the lowest (0.6).

Participant 1.1: “I would say that a more radical change to an umbrella would be worthwhile. I don't think we would have had enough time to do this though.”

Participant 1.2: “I would change the decision to focus on the phone mount. We made this decision because it was the first applicable idea, but we should have asked the customers what they would have preferred.”

Participant 5.4: “I would have started with a more feasible idea from the beginning. We wanted to go for a wild concept, then all of our customer feedback was that it was unfeasible.”

Some demographic personality data may also provide insight into status quo bias. Participants were asked their agreement with the following statement: "Usually, the solution that exists to a design problem (the status quo) is a good one." There was a statistically significant trend between agreement with this statement and satisfaction with the concept selected. The more likely a person was to believe that status quo solutions are typically good ones, the more likely they were to have been satisfied with the concept the team chose to move forward (ρ = 0.378, p = 0.025, n = 35) and the less likely they were to believe a better concept was left on the table (ρ = −0.360, p = 0.033, n = 35).

Future work should delve deeper into the relationship between rating a specification as important and rating a specification as easy to achieve. The methodology should be reworked to understand exactly why belief in innovation decreased through the semester. It should also consider that a certain amount of status quo bias may be a good thing for course projects. As Hu and Shealy note, "status quo bias is a heuristic that persists because overcoming it demands more cognitive attention and resources" [10]. There is a benefit to status quo bias in setting realistic expectations for course-based projects. However, it is worth understanding how to push some limits in a design course setting.

3.4 Ownership Bias.

Ownership bias occurs when one places more value on their own ideas or beliefs compared to the ideas and beliefs of others. To examine ownership bias, one area investigated was how participants took ownership of the customer needs generated. Ownership bias would suggest that participants placed a higher value on customer needs that they generated over needs discovered by other team members. Survey 1 asked participants to generate up to five customer needs that they believed would be important for design success (pre-CNA), and survey 2 asked participants to list what they believed to be the top three most important customer needs (post-CNA). Team submissions after the customer needs assessment were required to include the full set of customer needs moving forward, complemented by a ranking for each need (high, moderate, or low importance). These data were used in conjunction with survey 2 Likert scale responses concerning each participant's market perception.

A strong positive correlation was found between the number of one's own predicted needs (survey 1) ranked within the individual's top three customer needs (survey 2) and the Likert scale responses to the following survey 2 statements: "I am a part of the potential market for this product" (ρ = 0.395, p = 0.019, n = 35) and "I have purchased products in this market" (ρ = 0.401, p = 0.017, n = 35). This means that participants who ranked more of their own predicted needs among the top three most important needs were significantly more likely to consider themselves a part of the market or to have purchased products in the market previously. This could be interpreted as participants taking ownership of the customer needs because they had a better grasp of the market before the project began. However, of the 21 participants who somewhat/strongly agreed that "I am a part of the potential market for this product," 62% (13 participants) included at least one preconceived need that the team submissions did not include as high importance (eight participants had two such needs). This implies that almost two-thirds of participants who considered themselves a part of their design project's market placed a higher value on the needs they personally generated than the team did collectively. This can be considered a case for ownership bias, as the participants attributed increased value to needs they "owned" because they believed they were a part of the market and had ownership over the customer needs. These results may be comparable to Zheng and Miller, where individuals took ownership of ideas that other team members felt had low goodness [31].
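The overlap check described above amounts to a set comparison between an individual's predicted needs and the needs the team ultimately ranked as high importance. A minimal sketch, with hypothetical need labels (in the study, responses were matched by the researchers):

    # Hypothetical need labels; real survey text was matched manually.
    predicted_needs = {"lightweight", "low cost", "easy to clean", "durable", "portable"}
    team_high_importance = {"durable", "low cost", "safe", "quiet"}

    # Predicted needs that the team did not rank as high importance.
    unvalidated = predicted_needs - team_high_importance
    print(len(unvalidated), sorted(unvalidated))
    # 3 ['easy to clean', 'lightweight', 'portable']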

The second assessment of ownership bias came at the end of the semester, reflecting on major decisions. For survey 5, participants were asked to describe one decision critical to the success of the design, as well as one decision that they would change if possible. They were also asked whether they were influential in those decisions. Survey responses were categorized based on how participants assigned credit for the decisions made. Two researchers coded 25% of the data with a 100% agreement rate, and one researcher categorized the remaining data set. The categories for responses (Table 5) and the categorization results (Fig. 6) are shown below, followed by two example responses. Two participants did not list an unsuccessful decision, so there are only 34 responses for that question.
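The inter-rater check reported here is simple percent agreement on the jointly coded subsample. A sketch with hypothetical labels follows (Cohen's kappa would be a chance-corrected alternative):

    # Two coders categorize the same 25% subsample; labels are hypothetical.
    coder_a = ["Team", "Self only", "Member", "Team", "Does not say"]
    coder_b = ["Team", "Self only", "Member", "Team", "Does not say"]

    agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
    print(f"percent agreement = {agreement:.0%}")  # 100%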

Fig. 6
Assigning influence to critical decisions
Table 5

Categories for assigning ownership to successful/unsuccessful decisions

Category | Definition
Team, self-emphasis | The participant assigns credit/blame to the team as a whole, but also emphasizes/specifies what they uniquely contributed compared to other team members
Self and member | The participant assigns credit/blame for influencing the decision to themselves, as well as to another specific team member
Member | The participant assigns credit/blame for influencing the decision only to another team member
Team | The participant assigns credit/blame for influencing the decision only to the team as a whole, and sees themselves as no more than equally influential compared to other team members
Self only | The participant assigns credit/blame for influencing the decision only to themselves
Does not say | The participant does not make it clear who influenced the decision

Team decision: “The decision to switch from rollercoaster style seating to smart overhead bins. The design was considered infeasible and based on the remaining time would have been complicated to sufficiently realize. The decision was made as a group, although I feel that with additional time, the concept could have been properly developed.”

Self only decision: “I think the lead screw aspect was critical to the design success. In the feedback from experts, they were very interested in this as it prevents having someone to have to manually turn the pile and saves time, manpower, and increases safety. I thought of the idea and was very influential in adding it to the design.”

Figure 6 shows that for decisions labeled successful, participants tended to assign themselves credit in some form. This was not the case for the decisions that participants would like to have modified: only seven responses mentioned the respondent specifically as influential in some way, compared to 19 for successful decisions. Almost four times as many decisions were credited as a general team decision when the decision was considered unsuccessful versus successful. These results could be considered ownership bias because participants either assigned more value to decisions they owned than to other important decisions, or took more ownership of decisions than they should have, simply because they saw those decisions as valuable. Successful decisions saw more references to oneself rather than to the team. The data are limited by the number of people who did not clearly indicate who was responsible for the decision; roughly one-third of the responses did not assign credit for their decisions.

3.5 Hindsight Bias.

Hindsight bias occurs when one believes an outcome to have been predictable, but only after having seen that outcome unfold. To investigate hindsight bias, opinions at the end of the projects were compared to responses and opinions from throughout the semester. Survey 5 included a set of statements (Table 6) for the following prompt: "Looking back on the decisions made during the semester, is there anything else you would have done differently?" Participants responded to these statements using a five-point Likert scale ranging from "strongly disagree" to "strongly agree." Table 7 breaks down the responses to these statements by team, showing the number of participants per team who strongly or somewhat agreed with each statement; cases where more than two of a team's four participants agreed, where exactly half agreed, and where fewer than half agreed are distinguished in the analysis below.

Table 6

Survey 5 reflection statements on decisions across the semester design project

Question | Statement
Q1 | I would have preferred a different problem space
Q2 | I would have revised our method for generating customer needs
Q3 | I would have placed emphasis on different customer needs
Q4 | I would have given our team more realistically attainable design specifications
Q5 | I would have given myself more time to ideate
Q6 | I would have preferred a different concept for the final design
Q7 | I would have voiced my opinion more about critical decisions I disagreed with
Q8 | I would have made more modifications to the design after the user feedback
Q9 | I would have taken more risks to make the final design more innovative
Table 7

Participants per team that "strongly/somewhat agree" with each statement

Team | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9
Team 1 | 2 | 1 | 1 | 0 | 4 | 1 | 0 | 3 | 3
Team 2 | 1 | 2 | 1 | 0 | 0 | 0 | 1 | 0 | 0
Team 3 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 3 | 2
Team 5 | 0 | 0 | 3 | 1 | 2 | 1 | 2 | 2 | 2
Team 6 | 1 | 1 | 0 | 0 | 2 | 0 | 0 | 1 | 0
Team 7 | 0 | 2 | 0 | 1 | 4 | 2 | 0 | 3 | 3
Team 8 | 1 | 1 | 1 | 1 | 0 | 2 | 0 | 0 | 1
Team 9 | 1 | 1 | 0 | 0 | 3 | 0 | 0 | 2 | 1
Team 10 | 1 | 0 | 1 | 1 | 4 | 0 | 0 | 3 | 3

From Table 7, we can see that Q5, Q8, and Q9 resonated the most across all teams: giving themselves more time to ideate, making more modifications to the design after user feedback, and taking more risks to produce an innovative product. We can also see which teams may have had regrets across the entire project versus teams that felt satisfied with their process. For example, teams 2 and 8 never had more than half of their members agree with any of the statements, and only once did either team have two of its four members agree. On the other hand, teams 1, 7, and 10 each had over half of their members agree with three of the nine statements. Lastly, teams 5 and 7 had at least two team members agree with over half of the statements (5 of 9).
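Team-level agreement counts like those in Table 7 can be tabulated from long-format Likert records, coding agreement as a response of 4 ("somewhat agree") or 5 ("strongly agree"). The records below are hypothetical:

    import pandas as pd

    # Hypothetical long-format records: one row per participant per statement.
    records = pd.DataFrame({
        "team":     ["Team 1", "Team 1", "Team 1", "Team 2"],
        "question": ["Q5", "Q5", "Q9", "Q5"],
        "response": [5, 4, 4, 3],
    })

    counts = (records.assign(agreed=records["response"] >= 4)
                     .pivot_table(index="team", columns="question",
                                  values="agreed", aggfunc="sum", fill_value=0))
    print(counts)  # Team 1: Q5=2, Q9=1; Team 2: Q5=0, Q9=0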

As a final step of the hindsight bias analysis, these statements were compared to participant responses in prior surveys to understand if this was a true indication of hindsight bias. Correlations were found using a Spearman’s correlation on Likert scale data across all participants, not just a particular team or subset. The following correlations were found:

  • A negative correlation was found between statement Q3 and responses from survey 2. The more that participants agreed they would have placed emphasis on different customer needs, the less likely they were to have felt satisfied with the logical reasoning for their customer needs in the first place (ρ = −0.359, p = 0.032, n = 36).

  • Strong negative correlations were found between statement Q6 and responses from survey 3. Those who would have preferred a different final concept were less likely to have felt satisfied with the concept selected after ideation (ρ = −0.629, p = 0.001, n = 36), the reasoning for choosing that concept (ρ = −0.504, p = 0.002, n = 36), as well as the criteria for selection (ρ = −0.499, p = 0.002, n = 36).

  • Strong positive correlations were found between statement Q6 and responses from survey 3. The more that participants believed they would have preferred a different final concept, the more likely they were to agree that a better concept was left on the table (ρ = 0.651, p = 0.001, n = 36) and that they would have chosen a more challenging concept even though it may have failed (ρ = 0.352, p = 0.035, n = 36).

  • Strong negative correlations were found between statement Q7 and responses from surveys 3 and 4. The more that participants agreed they should have voiced their opinions more, the less likely they were to agree that they advocated for their beliefs during the concept selection process (ρ = −0.540, p = 0.001, n = 36), the concept chosen (ρ = −0.472, p = 0.004, n = 36), and the revisions after user feedback (ρ = −0.379, p = 0.025, n = 35).

Looking at these correlations, it appears that participants recognized in real time their displeasure with specifics, such as customer needs and design concepts, and with the processes for achieving those outcomes. However, there appears to be more hindsight bias in recognizing that they should have advocated more for their ideas and beliefs across the semester. As stated in the bandwagon effect analysis, most participants who did not advocate for their ideas or beliefs still felt satisfied with the project outcomes associated with each survey. This may imply either that beliefs about the need to advocate were modified at the end of the semester, a truer sign of hindsight bias, or that satisfaction during the semester was influenced by the good subject effect: participants who did not want to be seen as the unhappy member of the group [38]. These results do follow one definition of hindsight bias provided by Kerin, as there was "little or no evidence to predict" that participants would be inclined to wish they had advocated more [17].

4 Error Management Discussion

As discussed in the background section, adaptive rationality does not cast cognitive biases as weaknesses or errors, but as efficient adaptations for survival [1,2]. This occurs through heuristics (saving time and resources in exchange for a potentially suboptimal outcome), error management (acting toward the less costly error, for example when false positives are less costly than false negatives), and experimental artifacts (preserving resources or livelihood in an unnatural or unusual environment).

The results of the bias assessment in this study have been framed in terms of error management: how participants may have perceived the costs of exhibiting these biases as far less than the costs of avoiding them. An overview is shown in Table 8.

Table 8

Overview of biases with respect to error management

Ownership bias
  Bias description: Bias toward the customer needs you were previously aware of as important
  False positive: Belief that you are aware of the most important needs
  False negative: Belief that you are not aware of the most important needs
  Cost of false positive: Low. Some needs are met, but not all; the design retains some value
  Cost of false negative: Moderate. Higher cognitive load to redirect toward new needs

Hindsight bias
  Bias description: Bias toward believing you should have advocated more during the semester
  False positive: Hindsight belief that you did not advocate enough during the project
  False negative: Hindsight belief that you did advocate enough during the project
  Cost of false positive: Low. Perception is that you were not to blame for project flaws
  Cost of false negative: High. Perception that the team overruled you, or that you influenced flawed decisions

Status quo bias
  Bias description: Bias toward satisfaction with design solutions without innovation
  False positive: Satisfaction with design project progress
  False negative: Lack of satisfaction with design project progress
  Cost of false positive: Low. Project ends with course completion
  Cost of false negative: High. Time and energy diverted away from other semester tasks

Availability bias
  Bias description: Bias toward overestimating market size
  False positive: Market is perceived as larger than reality
  False negative: Market is perceived as smaller than reality
  Cost of false positive: Low. Participant is not required to bring the product to a real market
  Cost of false negative: High. Participant would exert considerable energy to refocus the design space

Bandwagon effect
  Bias description: Bias toward satisfaction with the majority opinion
  False positive: Not advocating for your beliefs or ideas
  False negative: Advocating for your beliefs or ideas
  Cost of false positive: Low. Continuing the project with good morale and chemistry
  Cost of false negative: Moderate. Potential backlash or conflict, and more time committed to the decision

The table offers a new perspective on heuristic decision-making. Rather than framing it as saving time and resources, it frames it as choosing the less costly error. For example, in heuristic terms, the availability bias allows participants to save time and resources by acting on the most available information, even though it may not be the best information. In error-management terms, participants may decide that the cost of holding a misperception of the market is lower than the cost of the additional effort required to find the true market size. There is less future work involved if you believe that you already know there is a market and what that market needs. This may explain why participants who felt invested in the process were aware that their end users were simply the most available users they could find, yet showed no significant dissatisfaction with the results.
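
As a toy instantiation of this trade-off for the availability-bias row of Table 8 (all probabilities and costs below are invented for illustration and were not measured in this study):

    # Toy expected-cost comparison for the availability-bias row of Table 8.
    # All probabilities and costs are invented for illustration only.
    def expected_cost(p_error: float, cost: float) -> float:
        """Expected cost of committing an error with probability p_error."""
        return p_error * cost

    # False positive: overestimating the market. Cheap in a course setting,
    # since the product never ships to a real market.
    e_fp = expected_cost(p_error=0.6, cost=1.0)

    # False negative: underestimating the market. Expensive, since refocusing
    # the design space mid-semester takes considerable effort.
    e_fn = expected_cost(p_error=0.4, cost=5.0)

    print(f"E[cost of FP] = {e_fp:.2f}, E[cost of FN] = {e_fn:.2f}")
    # Error management predicts tolerating the bias when the FP error is cheaper.
    print("FP-prone bias is the cheaper error" if e_fp < e_fn else
          "FP-prone bias is not the cheaper error")

Under these invented numbers the false alarm is the cheaper error, matching the low/high cost pattern in Table 8.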

There are other examples of this as well. The bandwagon effect may reflect a judgment that the cost of being satisfied with the majority opinion is much lower than the cost of going against the grain in a meeting or pushing the team toward more ideation or analysis. Those exhibiting status quo bias may have weighed whether additional effort toward innovation was worthwhile in light of their semester as a whole, their passion for the project, and their course grades; to push the envelope, participants had to decide how much mental effort the innovation they produced with their team was worth. Those exhibiting ownership bias over customer needs may believe that they will help the team in the long run by holding onto the needs they judge to be of higher value to the market, rather than letting the team discover later in the semester that those needs were incorrectly ranked.

While this study focused on student design teams, there are clear analogies to professional design practice that could be addressed in future work. For example, the bandwagon effect may be present in professional practice when a team spans varying levels of seniority, experience, or management, so that the opinions of some designers appear to carry more weight than others. Status quo bias could appear when a company does not want to disrupt its currently successful strategies, or when a designer is concerned about yearly reviews (rather than a final course grade); a classroom instructor may be more willing to encourage thinking outside the box than some organizational cultures. Lastly, ownership biases could exist for those seeking professional recognition. In all of these instances, preferring one type of cost over another may feel internally rational to the designer.

5 Conclusion

This study shed light on common biases in student design teams and on where and how they may occur. It used real student design project data along with survey responses about the students' experiences in their teams. After highlighting these biases, we discussed how each bias may fit within the lens of adaptive rationality, particularly the forms of heuristics, error management, and experimental artifacts.

There are clear limitations to this study. First, some course deliverables that could have been used to support or challenge our claims were not submitted through course assignments. For example, the interview and survey questions that teams used to elicit customer needs and user feedback were not submitted as part of the corresponding class assignments; only artifacts such as the final set of customer needs and the summary of the user feedback process were available for this study. This prevented checks for potential framing bias or confirmation bias in those questions. Future studies would need more of this unstructured data to provide additional converging evidence of these biases.

The COVID-19 pandemic also interrupted this semester of data collection, and the resulting redirection of projects, courses, and each participant's general lifestyle may all have been factors in these results. For example, access to makerspaces may have become more limited, although physical prototypes were not mandated for the course. Processes such as obtaining user feedback or holding group ideation sessions may have been more virtual than originally planned, and team dynamics may have shifted as the mode of interaction with teammates changed.

This is a case study, so the results cannot be generalized to other settings. However, it can serve as the basis for future inquiries and for targeted design interventions to mitigate biases. For example, a future offering of a graduate-level engineering design course could include methods intended to avoid the biases observed in this design environment, and its results could be compared against those reported here. Additionally, the methods used in this study could be refined to estimate the magnitude of the hypothesized biases.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.

References

1. Haselton, M. G., Bryant, G. A., Wilke, A., Frederick, D. A., Galperin, A., Frankenhuis, W. E., and Moore, T., 2009, "Adaptive Rationality: An Evolutionary Perspective on Cognitive Bias," Soc. Cogn., 27(5), pp. 733–763.
2. Haselton, M. G., Nettle, D., and Murray, D. R., 2016, "The Evolution of Cognitive Bias," Handbook of Evolutionary Psychology, D. M. Buss, ed., John Wiley & Sons, Inc., Hoboken, NJ.
3. Fillingim, K. B., Nwaeri, R. O., Borja, F., Fu, K., and Paredis, C. J. J., 2019, "Design Heuristics: Extraction and Classification Methods With Jet Propulsion Laboratory's Architecture Team," ASME J. Mech. Des., 142(8), p. 081101.
4. Fillingim, K. B., Shapiro, H., Fu, K., and Paredis, C. J. J., 2019, "Process Heuristics: Extraction, Analysis, and Repository Considerations," IEEE Syst. J., 14(4), pp. 5148–5159. doi:10.1109/JSYST.2019.2959538
5. Toh, C. A., Strohmetz, A. A., and Miller, S. R., 2016, "The Effects of Gender and Idea Goodness on Ownership Bias in Engineering Design Education," ASME J. Mech. Des., 138(10), p. 101105.
6. Thaler, R. H., 1980, "Toward a Positive Theory of Consumer Choice," J. Econ. Behav. Organ., 1(1), pp. 39–60.
7. Kahneman, D., Knetsch, J. L., and Thaler, R. H., 1990, "Experimental Tests of the Endowment Effect and the Coase Theorem," J. Polit. Econ., 98(6), pp. 1325–1348.
8. Madrian, B. C., and Shea, D. F., 2001, "The Power of Suggestion: Inertia in 401(k) Participation and Savings Behavior," Q. J. Econ., 116(4), pp. 1149–1187.
9. Samuelson, W., and Zeckhauser, R., 1988, "Status Quo Bias in Decision Making," J. Risk Uncertain., 1(1), pp. 7–59.
10. Hu, M., and Shealy, T., 2020, "Overcoming Status Quo Bias for Resilient Stormwater Infrastructure: Empirical Evidence in Neurocognition and Decision-Making," J. Manage. Eng., 36(4), p. 04020017.
11. Gilovich, T., Griffin, D., and Kahneman, D., 2002, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press, Cambridge, UK.
12. Hallihan, G. M., Cheong, H., and Shu, L. H., 2012, "Confirmation and Cognitive Bias in Design Cognition," ASME 2012 International Design Engineering Technical Conferences, Chicago, IL, Aug. 12–15, pp. 913–924.
13. Rikkers, L. F., 2002, "The Bandwagon Effect," J. Gastrointest. Surg., 6(6), pp. 787–794.
14. Barnfield, M., 2020, "Think Twice Before Jumping on the Bandwagon: Clarifying Concepts in Research on the Bandwagon Effect," Political Stud. Rev., 18(4), pp. 553–574.
15. Gavious, A., and Mizrahi, S., 2001, "A Continuous Time Model of the Bandwagon Effect in Collective Action," Soc. Choice Welf., 18(1), pp. 91–105.
16. Choi, S.-M., Lee, H., Han, Y.-S., Man, K. L., and Chong, W. K., 2015, "A Recommendation Model Using the Bandwagon Effect for E-Marketing Purposes in IoT," Int. J. Distrib. Sens. Netw., 11(7), p. 475163.
17. Kerin, T., 2018, "Accounting for Hindsight Bias: Improving Learning Through Interactive Case Studies," Loss Prev. Bull., 264, pp. 17–20.
18. Williams, C. B., Gero, J., Lee, Y., and Paretti, M., 2011, "Exploring the Effect of Design Education on the Design Cognition of Mechanical Engineering Students," ASME International Design Engineering Technical Conferences, Washington, DC, Aug. 28–31, pp. 607–614.
19. Gaughran, W. F., 2002, "Cognitive Modelling for Engineers," ASEE Annual Conference Proceedings, Montreal, QC, Canada, June 16–19, pp. 7.297.1–7.297.13.
20. Cross, N., 2004, "Expertise in Design: An Overview," Des. Stud., 25(5), pp. 427–441.
21. Stanovich, K. E., and West, R. F., 2008, "On the Relative Independence of Thinking Biases and Cognitive Ability," J. Pers. Soc. Psychol., 94(4), pp. 672–695.
22. Hallihan, G. M., and Shu, L. H., 2013, "Considering Confirmation Bias in Design and Design Research," J. Integr. Des. Process Sci., 17(4), pp. 19–35.
23. Nelius, T., Doellken, M., Zimmerer, C., and Matthiesen, S., 2020, "The Impact of Confirmation Bias on Reasoning and Visual Attention During Analysis in Engineering Design: An Eye Tracking Study," Des. Stud., 71, p. 100963.
24. Nelius, T., and Matthiesen, S., 2019, "Experimental Evaluation of a Debiasing Method for Analysis in Engineering Design," 22nd International Conference on Engineering Design, Delft, The Netherlands, Aug. 5–8, pp. 489–498.
25. Viswanathan, V., and Linsey, J. S., 2013, "Role of Sunk Cost in Engineering Idea Generation: An Experimental Investigation," ASME J. Mech. Des., 135(12), p. 121002.
26. Zheng, X., Ritter, S. C., and Miller, S. R., 2018, "How Concept Selection Tools Impact the Development of Creative Ideas in Engineering Design Education," ASME J. Mech. Des., 140(5), p. 052002.
27. Yang, X. J., Wickens, C. D., and Hölttä-Otto, K., 2016, "How Users Adjust Trust in Automation: Contrast Effect and Hindsight Bias," Human Factors and Ergonomics Society Annual Meeting, Los Angeles, CA, Sept. 19–23, pp. 196–200.
28. Woods, D. D., 2003, "Creating Foresight: How Resilience Engineering Can Transform NASA's Approach to Risky Decision Making," Work, 4(2), pp. 137–144.
29. Vermillion, S. D., Malak, R. J., Smallman, R., and Linsey, J., 2015, "A Study on Outcome Framing and Risk Attitude in Engineering Decisions Under Uncertainty," ASME J. Mech. Des., 137(8), p. 084501.
30. Toh, C. A., Strohmetz, A. A., and Miller, S. R., 2016, "The Effects of Gender and Idea Goodness on Ownership Bias in Engineering Design Education," ASME J. Mech. Des., 138(10), p. 101105.
31. Zheng, X., and Miller, S. R., 2019, "Is Ownership Bias Bad? The Influence of Idea Goodness and Creativity on Design Professionals' Concept Selection Practices," ASME J. Mech. Des., 141(2), p. 021106.
32. Onarheim, B., and Christensen, B. T., 2011, "Idea Screening in Engineering Design Using Employee-Driven Wisdom of the Crowds," International Conference on Engineering Design, Copenhagen, Denmark, Aug. 15–18.
33. Austin-Breneman, J., Yu, B. Y., and Yang, M. C., 2016, "Biased Information Passing Between Subsystems Over Time in Complex System Design," ASME J. Mech. Des., 138(1), p. 011101.
34. Parsons, J., and Saunders, C., 2004, "Cognitive Heuristics in Software Engineering: Applying and Extending Anchoring and Adjustment to Artifact Reuse," IEEE Trans. Softw. Eng., 30(12), pp. 873–888.
35. Mohanani, R., Salman, I., Turhan, B., Rodriguez, P., and Ralph, P., 2020, "Cognitive Biases in Software Engineering: A Systematic Mapping Study," IEEE Trans. Softw. Eng., 46(12), pp. 1318–1339.
36. Stacey, M., and Eckert, C., 1999, "CAD System Bias in Engineering Design," International Conference on Engineering Design, Munich, Germany, Aug. 24–26, Vol. 2, pp. 923–928.
37. Jørgensen, M., 2013, "The Influence of Selection Bias on Effort Overruns in Software Development Projects," Inf. Softw. Technol., 55(9), pp. 1640–1650.
38. Nichols, A. L., and Maner, J. K., 2008, "The Good-Subject Effect: Investigating Participant Demand Characteristics," J. Gen. Psychol., 135(2), pp. 151–165.