Concept selection tools have been heavily integrated into engineering design education in an effort to reduce the risks and uncertainties of early-phase design ideas and to aid students in the decision-making process. However, little research has examined the utility of these tools in promoting creative ideas or their impact on student team decision making throughout the conceptual design process. To fill this research gap, the current study was designed to compare the impact of two concept selection tools, the concept selection matrix (CSM) and the tool for assessing semantic creativity (TASC), on the average quality (AQL) and average novelty (ANV) of ideas selected by student teams at several decision points throughout an 8-week project. The results of the study showed that the AQL increased significantly in the detailed design stage, while the ANV did not change. However, this change in idea quality was not significantly impacted by the concept selection tool used, suggesting that other factors may impact student decision making and the development of creative ideas. Finally, student teams were found to select ideas ranked highly by concept selection tools only when these ideas met their expectations, indicating that cognitive biases may be significantly impeding decision making.

Introduction

Concept selection has been shown to be the gatekeeper of the product design and development process due to its impact on the quality, cost, and desirability of the final product [1], as well as its impact on the development time and cost of later design stages [2]. During this process, concepts generated earlier in the design process are evaluated, selected, and synthesized into a final solution for further development in order to address the design goal [3,4]. In industry, selecting highly creative concepts increases the likelihood of radical innovation and product success [5], while selecting poor concepts can result in large expenses, including redesign costs and production postponement. Because of this, creativity, or ideas that are both high in quality and novelty [6–9], has been considered an essential factor in product development [10], and training engineering students to be creative has become a necessary component of engineering education [11].

While creativity is an important factor throughout the engineering design process, it is typically only emphasized during idea generation in engineering design education [12]. This is problematic because Rietzschel et al. [13] have pointed out that, “The advantages of having many creative ideas at one's disposal can be easily undone by a suboptimal selection process. Instead of simply making groups more productive, it may be more fruitful to make them more effective in all stages of the creative process.” Therefore, it is important to study what impacts the development of creative ideas after idea generation in order to better educate students in innovation practices. To date, only a few studies have investigated the evolution of creative ideas or the factors that lead to the promotion or filtering of these ideas after idea generation in engineering design education [14–18]. For example, Starkey et al. identified a reduction in the creativity of student design ideas from the idea generation stage to the students' final conceptual design, regardless of the design task being explored [19]. This study, however, did not look at the underlying factors that led to the filtering of these creative ideas or the impact of the concept selection tool used.

The evaluation of the utility of concept selection tools in the engineering design process is important because these tools were developed to aid decision makers in the fuzzy front end of the design process, a stage of design that is rife with ambiguities and uncertainties [20]. Tools that have been routinely introduced in engineering design courses to provide a systematic structure for decision making [21] include the Pugh chart [22], quality function deployment [23], and the analytic hierarchy process (AHP) [24,25]. These traditional tools have been considered efficient ways to aid novices (e.g., engineering design students) in identifying sensible concepts and potentially mitigating judgment errors due to their transparency and high repeatability [26,27]. However, they have also been criticized because their transparency can incite confirmation bias [28] and because they cannot measure global creativity (the creativity of an idea with respect to existing products and ideas on the market [29,30]), which may contribute to a lack of innovation in the final product [13,31]. In order to mitigate some of the deficits of these traditional concept selection tools and provide a more practical tool to measure global creativity, Gosnell and Miller [32] developed the tool for assessing semantic creativity (TASC) to help a group of novices achieve expert-level creativity ratings based on single-adjective selection and semantic similarity.

No matter what decision tool is used in the design process, these concept selection instruments can provide only directions or suggestions on which designs to select for further development [33]. In other words, these tools do not make the decision for decision makers; an imperfect human decision maker does. Even if a concept selection tool is optimized to identify the most creative (high-quality and high-novelty) ideas within a set, the tool used to inform this decision must be trusted by the individual using it. Therefore, when investigating the impact of concept selection tools on the flow of creative ideas, it is important not only to explore the effectiveness of a decision tool in providing recommendations on the selection of creative ideas, but also to identify whether the concept selection decisions made by human decision makers align with these recommendations.

While concept selection tools have been applied in engineering education settings, the impact of these tools on students' selection of creative ideas as part of the engineering design process has yet to be investigated. To fill this research gap, the current study was developed to understand how the utilization of concept selection tools impacts the creativity of the ideas selected, through an empirical investigation with 60 engineering students in an introduction to engineering design course. Specifically, the student teams in this study were monitored during team informal screening, selection, and final conceptual design in an 8-week grade-dependent course project that emphasized creative idea development. Half of the students in the study utilized a traditional concept selection tool (the concept selection matrix (CSM)), while the other half utilized the newly developed TASC method. The results of this study are used to identify the impact of these tools on the creativity of ideas selected by student teams and to provide recommendations to educators on the adoption of concept selection tools in classrooms and to researchers on modifying existing concept selection tools or developing new ones.

Related Work

In order to discover the impact of concept selection tools on student team design outcomes, previous research was explored. This section summarizes this prior work and provides support for the current investigation.

Creativity in the Design Process.

Creativity has been regarded as an essential factor to consider during the product design process in both engineering design education and industry. In order to boost the creativity of ideas generated, researchers have developed ideation tools to encourage students to generate a large number of ideas (see, for example, Refs. [34–37]). Even though the effectiveness of idea generation can be improved using these tools, a reduction in creativity throughout the design process and the abandonment of novel ideas have been observed in engineering design class projects regardless of the design task being explored [19]. In other words, merely generating creative ideas is not enough to promote the development of creative concepts in an educational context [13]. However, it is unclear what factors may lead to students' abandonment of creativity.

While not specifically studied in the context of creativity throughout the design process, previous research on students' concept selection processes has shown that students often value technical feasibility during the concept selection process [18,38] and select feasible and desirable ideas at the cost of originality [39]. In addition, concept selection has been shown to be largely subject to individual attributes [40], risk-taking attitudes [16,41,42], and various cognitive biases and heuristics, such as design fixation [43–45], ownership bias [17,46], and the bias against creativity [47]. Specifically, individuals have been said to have an inherent bias against creative ideas due to the risks and uncertainties these ideas carry [39,48], and their judgments of originality have been found to be negatively related to judgments of appropriateness [49]. It has also been argued that students tend to be less creative in class projects when there is a risk of receiving poor grades [50], since engineering students believe getting good grades is more important than engaging with the learning process [38]. Even though previous research has shown that explicitly instructing students to select creative ideas can improve their effectiveness in selecting novel ideas, students in this prior work were not satisfied with the informal team discussion of idea selection and perceived the selected ideas as having low effectiveness [39].

The Concept Selection Matrix: Benefits and Limitations.

In order to eliminate the effect of the factors mentioned earlier and mitigate the risks and uncertainties associated with the fuzzy front end of the design process [51], many formal concept selection tools have been created to provide a framework to guide concept selection (see, for example, Refs. [22–25]). One concept selection tool that has been heavily integrated throughout engineering education is the CSM [52] due to its transparent decision process and high repeatability [21]. Specifically, the CSM was created to aid novices in comparing alternatives through a combination of the criteria developed in the AHP [25,53,54] and subjective ratings of candidate ideas [52]. As an example, Drake introduced the method into undergraduate and postgraduate student projects and found it helpful in providing insights into students' reasoning [55]. On the one hand, the CSM method helps enhance group consensus and commitment to the decision [56] by structuring a framework that systematically directs the pattern and content of discussion and encourages team members to analyze the problem as a collaborative unit [57,58]. On the other hand, utilizing the CSM in a group setting has been shown to expose designers to team-level biases that impact decision making [59], such as conformity to majority opinions [60], the halo effect [61], and social loafing [62]. In addition, the CSM method requires users to set up a problem hierarchy, determine paired comparisons, establish priorities, and calculate aggregated scores from the relative priorities and weights of criteria [25]. Accordingly, the complexities of the CSM method make various implementations problematic in an increasingly fast-paced industry [63,64]. The CSM method is also limited by inherent shortcomings of the AHP, such as its restriction to hierarchically structured problems [65], rank reversal when an irrelevant alternative [66,67] or an indifferent criterion [68] is added, and the convergence of alternatives [33].

While research on these methods has focused on the benefits and deficits of their decision-making effectiveness, little research has explored the impact of these concept selection tools on student teams' decision-making processes or outcomes. In other words, few researchers have tracked whether students use the recommendations provided by formal concept selection tools in class projects or how this affects the creativity of the ideas developed throughout this process. Without this information, we do not know how, if at all, concept selection tools play a role in students' abandonment of creativity throughout the conceptual design process. The current study seeks to fill this research void and provide a preliminary understanding of the impact of formal concept selection tools on the quality and novelty of ideas selected by student teams at several decision points.

The Tool for Assessing Semantic Creativity: A New Method of Concept Evaluation in Engineering Education.

In order to overcome the limitations of existing concept selection tools, particularly with reference to ratings of idea creativity, the TASC method was developed [15,32]. This method uses natural language processing and semantic similarity to quantify product originality and feasibility by taking advantage of both subjective opinions and computational power [15,32]. Specifically, the TASC method provides a website to process creativity evaluations, which requires each team to upload their candidate ideas (with no limit on the number compared) and each team member to individually select 3–5 adjectives that best describe each idea (see Fig. 1 for an example) [32]. Once these ratings have been completed, novelty ratings for each idea are calculated by adding the novelty weights for each of the words chosen by each participant for each design idea, where the weights are determined using WordNet::Similarity to analyze the semantic similarity between the selected adjectives and the word “innovative.” Similarly, quality ratings for each idea are calculated by adding the quality weights for each of the words chosen by each participant for each design idea, based on the semantic similarity between the selected adjectives and the word “feasible.” Importantly, the weights of the words used to rate each idea for novelty and quality are blind to the participants.
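To make the aggregation step concrete, the sketch below sums hidden per-adjective weights across raters. The adjective list and weight values are hypothetical placeholders; in the actual tool, the weights are derived from WordNet::Similarity scores against “innovative” and “feasible” and are computed on the back end, out of participants' view.

```python
# Sketch of the TASC aggregation step. The weights below are hypothetical
# placeholders; the real tool derives hidden weights from WordNet::Similarity
# against "innovative" (novelty) and "feasible" (quality).
NOVELTY_WEIGHT = {"futuristic": 0.9, "original": 0.8, "practical": 0.3, "conventional": 0.1}
QUALITY_WEIGHT = {"futuristic": 0.2, "original": 0.4, "practical": 0.9, "conventional": 0.8}

def tasc_scores(adjectives_per_rater):
    """Sum the hidden novelty/quality weights of every adjective each rater chose."""
    chosen = [a for rater in adjectives_per_rater for a in rater]
    novelty = sum(NOVELTY_WEIGHT[a] for a in chosen)
    quality = sum(QUALITY_WEIGHT[a] for a in chosen)
    return novelty, quality

# Two raters each describe one idea (the real tool asks for 3-5 adjectives).
ratings = [["futuristic", "original"], ["futuristic", "practical"]]
```

A team would repeat this for each uploaded idea and rank the ideas by the aggregated scores, without any rater ever seeing the weight table.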

Fig. 1
Screenshot of the TASC website asking participants to choose 3–5 adjectives from a word bank

The TASC method helps teams make decisions through crowdsourcing, or by aggregating individual decisions without intervention from other team members [69]. In this way, the impact of group size and group interaction on collective judgments can be avoided, and the time and money required to find, meet, and train skilled raters can be reduced [70]. This type of aggregation is also in line with Amabile's [71] definition of creativity, which identifies a product (or idea) as creative to the extent that appropriate observers independently agree it is creative. Prior research has pointed out that the TASC method provides input that allows designers to consider creativity more thoughtfully [15], so that creative ideas can be retained longer in the design process. Moreover, it has been illustrated that the aggregated TASC ratings of 11 novice designers can mimic expert ratings [15].

While both traditional concept selection tools like the CSM and new tools like the TASC method can provide design decision-making directions for designers, there are significant differences in their purposes and the underlying approaches for informing human decision making during the design process. For example, the TASC method utilizes semantic creativity ratings, leading to evaluations of idea creativity on a global scale, in an effort to promote the selection of creative ideas [32]. Both methods, though, can serve as concept selection tools that provide recommendations during the concept selection process and aid designers, especially novices, in making decisions.

Even though practicing designers prefer intuitive methods over formal decision-making tools [72–75], formal tools like the CSM method are often introduced in engineering design classrooms. The transparency of the CSM method's rating process [55] may help students learn and develop trust in the tool, but it may also allow student decision makers to easily manipulate the criteria, weights, and evaluation results to get the answer they want (confirmation bias [76,77]). This is problematic because researchers have recommended that concept selection tools be used only as a “decision consultant” and not as a means for deriving the final answer [33]. In addition, this confirmation bias can cause fixation on initial ideas and block decision makers from identifying other, perhaps better, design alternatives [72]. Therefore, it is important to investigate and compare the impact of the CSM method and newly developed tools such as the TASC on the decisions students make after a concept selection tool ranks an idea set, in order to provide recommendations on how to incorporate concept selection tools in engineering design curricula.

In order to fill this research gap, the current study was developed to compare and contrast the impacts of the CSM method and the TASC method on the quality and novelty of ideas selected by student teams at several decision points throughout the conceptual design process in an engineering design course.

Research Questions

The previous work brought to light that decisions in engineering design education about which ideas to keep and which to abandon can be impacted by both the concept selection tool utilized and the human making the decision. However, the influence of concept selection tools on student team decision making and design outcomes, particularly as they relate to the creativity of the ideas, is still unclear. Therefore, the current study was developed to answer the following questions:

  (1) How does the average creativity (quality and novelty) of student design teams' ideas change from team informal screening to the final conceptual design? We hypothesized that the average quality (AQL) of the student design teams' ideas would increase and that the novelty of the student design teams' ideas would decrease during the conceptual design process, since students are more likely to select feasible and desirable ideas [18,78] at the cost of originality [39].

  (2) What impact does the concept selection tool have on the evolution of the creativity (quality and novelty) of a student team's design ideas? We hypothesized that student teams who used the CSM would be more likely to see increases in the feasibility of their design ideas throughout the process than those who used the TASC method, since the CSM method specifically assesses the quality of the ideas based on design requirements and technical feasibility [25]. We also hypothesized that the use of the CSM method would have no impact on the novelty of ideas throughout the design process, since the CSM method does not typically include the novelty of ideas as a criterion [25]. In contrast, we hypothesized that student teams who used the TASC method would see an increase in the novelty of their design outcomes throughout the design process, since the TASC method evaluates the creativity of ideas by measuring both their novelty and quality [32].

  (3) Do student teams select ideas based on the recommendations of the concept selection tools? We hypothesized that students would be more likely to select ideas recommended by the CSM method than by the TASC method due to the transparency of the CSM method's decision-making process [1] and the fact that it has been widely integrated into engineering design education, building a larger sense of trust [21]. Alternatively, the TASC method is new and unfamiliar to students [32], and the weights of the words selected to rate each idea for novelty and quality are blind to the participants, which may lead students to perceive the method as less trustworthy due to a lack of transparency in the decision criteria.

Methodology

In order to answer these research questions, a study was conducted with two sections of a first-year undergraduate engineering design course with student teams working on the same, graded, 8-week design project. Each section of the course was randomly assigned at the start of the project to one of two conditions: the CSM and the TASC method. Details regarding the methodology used in this study are found in the remainder of this section.

Participants.

The participants in this study were undergraduate students in a first-year engineering design course taught at a large northeastern university. In all, 60 students (19 females and 41 males) from two sections of the course taught by the same instructor participated in the study, with 30 students in each section. In each section, students formed a total of 8 teams (6 four-member and 2 three-member teams) based on student proficiencies in three-dimensional modeling, sketching, and engineering design. First-year engineering design students were selected because they had received little information about concept selection tools prior to this course, so the CSM was not already rooted in their minds.

Procedure.

The design study presented here was a part of a graded 8-week design project conducted in a first-year engineering design course (see Fig. 2 for the timetable of the 8-week project). At the start of the project, students were given the following design problem based on a fictional location:

Fig. 2
The timetable of the project and the in-class design practices

“Pittsadelphia is looking for the design of a cost-effective freight shipping system that reduces smog and meets United States Environmental Protection Agency (EPA) requirements, while maintaining or increasing freight capacity into and out of this important port city.”

Suggestions, such as upgrading the locomotive fleet or adopting alternate freight shipping methods, were given at the start of the project (see the Appendix for the complete task description). After receiving the problem description, the students performed preliminary research on available transportation options during the first 2 weeks of the project. They used this research to develop design criteria and conducted an AHP analysis to determine the weights of those criteria. Following preliminary data gathering, and as part of the current study, students participated in a 20 min individual idea generation session following the rules of brainstorming [79]. They were asked to individually sketch out as many ideas as possible for the freight shipping system and to write notes on the sketches to help others understand each concept's features, such as the transportation mode and the route utilized (see Table 1 for sample ideas and the final conceptual design generated by a student team). Importantly, creativity was emphasized during the individual idea generation session and throughout the course project.
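For readers unfamiliar with the AHP step, the row geometric-mean method is one common way to approximate priority weights from a pairwise comparison matrix. The sketch below is illustrative only; the criteria and judgment values are hypothetical, not the ones developed by the student teams.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights using the row geometric-mean method."""
    n = len(pairwise)
    geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical 3x3 pairwise comparison of cost, emissions, and capacity:
# entry [i][j] states how much more important criterion i is than criterion j.
matrix = [
    [1.0,   3.0, 2.0],
    [1/3.0, 1.0, 0.5],
    [0.5,   2.0, 1.0],
]
weights = ahp_weights(matrix)  # normalized so the weights sum to 1
```

With these judgments, cost receives the largest weight and emissions the smallest; the weights then carry forward into the concept selection matrix.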

Table 1

Sample ideas generated by team 6 in TASC section from the consider category (top) and the final conceptual design (bottom)

Upon completion of the individual idea generation session, each student team was given 20 min to screen the ideas generated by all team members. During this process, the students were allowed to alter or combine their generated ideas or to create new ideas freely. Each team was then instructed to categorize the ideas into two piles: “consider” for ideas that had any useful elements for further development, either in whole or in part, and “do not consider” for designs that the team no longer wanted to consider as part of the process. The team informal screening process was intended to simulate the fast screening process used in industry and to reduce the number of ideas that would enter the concept selection tools.

Next, students evaluated ideas from the consider pile using the CSM or TASC method, depending on their section's assigned condition. Specifically, students in the CSM section of the course were given a 10 min instruction on why and how to use the CSM method. After that, the students were asked to go through the ideas in their consider pile and rank those ideas as a team using their previously developed AHP weights and a CSM template in Microsoft Excel. In order to use the CSM, students were asked to follow the process set forth by Ulrich and Eppinger [52]. Specifically, the students were asked to rate the candidate ideas against the criteria previously developed using the AHP, on a five-point scale where one indicated the idea failed to meet the criterion and five indicated the idea successfully met the criterion. By synthesizing the criteria weights and the corresponding ratings, an overall score was obtained for each candidate idea. Meanwhile, students in the TASC section of the course were given a 10 min introduction on why and how to use the TASC method and the TASC website. Then, teams used the TASC website to upload their ideas and to rate and rank them individually. See Fig. 3 for an example result using the TASC method. Importantly, the weights of the adjectives in the word bank were blind to students, and the calculation of the quality, novelty, and overall creativity scores happened on the back end; therefore, students were not able to change the weights of the adjectives or manipulate the final recommendations. In addition, students in the TASC section were also taught how to calculate customer needs weights using the AHP method as part of the curriculum design. Students in the TASC section, however, did not use these weights during the concept selection process to inform their decision making during the project studied.
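The CSM scoring described above reduces to a weighted sum of criterion ratings. A minimal sketch, with hypothetical AHP weights and ratings:

```python
def csm_score(ratings, weights):
    """Overall CSM score: weighted sum of 1-5 criterion ratings."""
    assert len(ratings) == len(weights)
    return sum(r * w for r, w in zip(ratings, weights))

# Hypothetical AHP weights for cost, emissions, capacity, and efficiency.
weights = [0.4, 0.3, 0.2, 0.1]
idea_a = [4, 3, 5, 2]  # five-point ratings: 1 = fails criterion, 5 = meets it
idea_b = [2, 5, 3, 4]
```

Here idea_a would outrank idea_b because it scores well on the heavily weighted cost criterion, illustrating how the AHP weights steer the final ranking.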

Fig. 3
Example of the idea ranked first by team 5 from the TASC section with a creativity score of 4.166

Once the evaluation of the candidate ideas was complete, the teams in both sections of the course were given 10 min to discuss the evaluation results and complete a team survey that included three questions: (1) What idea(s) does your team think should be further developed after today's activity? (2) What factors did your team consider when selecting these ideas? and (3) Are the ideas that your team chose to further develop ranked highly in the formal concept selection tool? If not, why did you select these concepts? Specifically, students were instructed that these tools should be used only as a “decision consultant” and not as a means for deriving the final answer [33]. Each student was then asked to individually fill out a survey that consisted of eight Likert-scale statements about their experience using their respective concept selection tool (CSM or TASC) and 12 Likert-scale statements about their personal preferences regarding characteristics of candidate ideas; these results were reported in a prior conference proceeding [78]. Finally, at the end of the project, approximately four weeks later, each team was asked to write a report including a detailed description of their final conceptual design.

Coding Methods and Metrics.

In order to quantify the novelty and quality of the ideas at different stages in the conceptual design process, the Shah et al. novelty metric [9] and Linsey's method [80] for evaluating idea quality were used. Specifically, the average novelty (ANV) and AQL of ideas were calculated at each stage of the concept selection process: team informal screening, team informal discussion, and final conceptual design, see Fig. 4. As a reminder, at the team informal screening stage, all of the ideas that were filtered into the consider pile were evaluated, while at the team informal discussion stage, all of the ideas that student teams selected for further development as the result of team informal discussion were evaluated. Finally, at the final conceptual design stage, all of the final conceptual designs that were either combined or revised from existing ideas were evaluated. Importantly, the novelty and quality ratings were conducted by two raters: one who had a Ph.D. in an engineering design related field and more than 4 years of experience, and one who had completed graduate course work in engineering design and had a minimum of two publications in the field of engineering design creativity. The rating process is described in detail in the remainder of this section.

Fig. 4
Ideas that were averaged for novelty and quality at each stage (Note: rankings in concept selection tools were not necessarily the same as rankings in team informal discussion)

Novelty.

The novelty of the ideas was defined as how unusual or unexpected an idea is compared to other ideas [9] and was calculated using the Shah et al. novelty metric [9]. Specifically, the two raters used a design rating survey to assess the novelty of each idea (see the linked website for the full design rating survey question list). This survey helped raters classify the features addressed by each design idea, similar to the approach used in previous studies [16,41]. The inter-rater reliability (percent agreement) between the two raters reached 0.88. Once the ratings were complete, the novelty of the ideas was calculated.

In order to calculate the novelty of the ideas and final designs, the feature novelty was first computed. Feature novelty, f_i, which measured the novelty of each feature i, was calculated from the frequency of the feature across all features addressed by all of the ideas and final designs. Feature novelty ranged from 0 to 1, with 1 indicating an extremely novel feature compared to other features, and was computed as follows:

f_i = (T − C_i) / T
(3)

where T is the total number of ideas and final designs and C_i is the total number of ideas and final designs that contain feature i.

Once feature novelty was computed, the overall novelty of the idea (idea novelty) was then calculated using the following computation:
N_{j,p} = (Σ_{i=1}^{n} f_i) / n
(4)

where f_i is the feature novelty of feature i and n is the total number of features in idea j.

Next, in order to compare the ANV of a team's idea set at each of the design stages (team informal screening, concept selection tool, and team informal discussion), the following calculation was used:
ANV_p = (Σ_{j=1}^{m} N_{j,p}) / m
(5)

where N_{j,p} is the novelty of team p's jth idea and m is the total number of ideas generated by team p.
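Equations (3)–(5) can be sketched directly in code. The pooled idea set below is hypothetical, with each idea represented as a set of coded features:

```python
def feature_novelty(feature, all_ideas):
    """Eq. (3): f_i = (T - C_i) / T, so rarer features score closer to 1."""
    T = len(all_ideas)
    C_i = sum(feature in idea for idea in all_ideas)
    return (T - C_i) / T

def idea_novelty(idea, all_ideas):
    """Eq. (4): mean feature novelty over the n features of one idea."""
    return sum(feature_novelty(f, all_ideas) for f in idea) / len(idea)

def average_novelty(team_ideas, all_ideas):
    """Eq. (5): mean idea novelty over a team's m ideas at one stage."""
    return sum(idea_novelty(idea, all_ideas) for idea in team_ideas) / len(team_ideas)

# Hypothetical pooled set of ideas, each coded as a set of features.
pool = [
    {"rail", "electric"},
    {"rail", "diesel"},
    {"drone", "electric"},
    {"barge", "solar"},
]
```

Here a feature appearing in only one of the four ideas scores 0.75, while one appearing in half of them scores 0.5; averaging over a team's selected ideas gives the stage-level ANV.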

Quality.

Quality was defined as a measure of a concept's feasibility and how well it meets the design specifications [9] and was measured on an anchored multipoint scale based on Linsey's quality measurement [80]. This scale was selected because it has been used by researchers in prior studies (see Refs. [19,41,42], and [80], for examples) and because previous work has suggested that inter-rater reliability is improved when each point on the quality scale has a defined meaning [80]. The four criteria used to evaluate the quality of the ideas were derived from the problem description: the cost, capacity, efficiency, and emissions of the newly proposed system compared to the current shipping system. Each criterion was rated using a three-point scale (“worse,” “no change,” and “better”). For the cost criterion, if the return time of the investment (including both fuel and infrastructure) was stated to be less than two years (the required return time from the project statement), the new shipping system was rated “better” and coded as a 1. If the return time of the investment was more than two years, the new shipping system was rated “worse” and coded as a −1. If the return time of the investment was exactly two years, the new shipping system was rated “no change” and coded as a 0. The same rating and coding strategies were applied to the capacity, efficiency, and emissions of the new shipping system. Importantly, the inter-rater reliability (percent agreement) for this method was 0.90, and any disagreements in the rating process were resolved through discussion between the two raters. Once the ratings were complete, the following calculation was used to quantify idea quality:
\[ Q_{j,p} = \frac{q_{\mathrm{cost},j,p} + q_{\mathrm{emission},j,p} + q_{\mathrm{capacity},j,p} + q_{\mathrm{efficiency},j,p}}{4} \]
(1)

where q_{cost,j,p}, q_{emission,j,p}, q_{capacity,j,p}, and q_{efficiency,j,p} are the ratings of team p's jth idea on the cost, emission, capacity, and efficiency criteria, respectively.
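The three-point coding and four-criterion average can be sketched as follows; the two-year payback threshold comes from the project statement, while the function names and example ratings are hypothetical:

```python
def rate_payback(return_time_years, threshold=2.0):
    """Code the cost criterion: +1 ("better") if the payback is under two
    years, 0 ("no change") if exactly two years, -1 ("worse") otherwise."""
    if return_time_years < threshold:
        return 1
    if return_time_years == threshold:
        return 0
    return -1

def idea_quality(q_cost, q_emission, q_capacity, q_efficiency):
    """Eq. (1): Q_{j,p} is the mean of the four criterion codes in {-1, 0, 1}."""
    return (q_cost + q_emission + q_capacity + q_efficiency) / 4

# Hypothetical idea: 1.5-year payback, better emissions,
# unchanged capacity, worse efficiency
q = idea_quality(rate_payback(1.5), 1, 0, -1)  # 0.25
```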

Next, in order to capture the AQL of the ideas generated by each team in the different design stages (team informal screening, concept selection tool, and team informal discussion), the following calculation was used:
\[ \mathrm{AQL}_p = \frac{1}{m}\sum_{j=1}^{m} Q_{j,p} \]
(2)

where Q_{j,p} is the quality of team p's jth idea and m is the total number of ideas generated by team p.
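Combining the two steps, a team's AQL at each design stage is the mean of its per-idea quality scores. A short sketch with hypothetical scores on [−1, 1] (not study data):

```python
def average_quality(quality_scores):
    """Eq. (2): AQL_p = (1/m) * sum of Q_{j,p} over a team's m ideas."""
    return sum(quality_scores) / len(quality_scores)

# Hypothetical per-idea quality scores for one team at three design stages
stages = {
    "informal screening": [-0.25, 0.0, -0.5, 0.25],
    "concept selection": [-0.25, 0.25, 0.0],
    "final conceptual design": [0.5],
}
aql_by_stage = {stage: average_quality(q) for stage, q in stages.items()}
# aql_by_stage["informal screening"] is -0.125
```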

Results

During the study, a total of 259 ideas were generated, of which 97 were categorized as "consider" (mean = 6.06 ideas/team, SD = 1.73) by the student teams, with a mean AQL of −0.14 (SD = 0.30) and a mean ANV of 0.68 (SD = 0.14). As a reminder, idea quality ranged from −1 to 1, where positive scores meant that the idea was better than existing solutions and negative scores meant that it was not as good as existing solutions. Idea novelty, in contrast, ranged from 0 to 1, where higher scores indicated higher levels of novelty. The remainder of this section presents the results of our analyses with respect to our research questions.

How Does the Average Creativity (Quality and Novelty) of Student Design Teams' Ideas Change From Team Informal Screening to the Final Conceptual Design?

Our first research question was developed to identify how, or to what effect, the quality and novelty of a student team's idea set changed throughout the course of their design project. We hypothesized that the AQL of student design teams' ideas would increase, while the ANV of their ideas would decrease, since students are more likely to select feasible and desirable ideas at the cost of originality [19,39]. In order to address this research question, the quality and novelty of each student team's ideas were compared at three design stages: team informal screening, team informal discussion of idea selection, and final conceptual design, see Fig. 4. Thus, the dependent variables in this research question were the ANV and quality (AQL) of a team's idea set, while the independent variable was the stage of the design process. Prior to conducting our analysis, assumptions for the repeated measures ANOVA were tested. Specifically, Mauchly's test of sphericity indicated that the assumption of sphericity was violated for AQL (χ2(2) = 6.46, p = 0.04) and ANV (χ2(2) = 6.61, p = 0.04); therefore, a Greenhouse–Geisser correction was used in this research question.
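For readers unfamiliar with the correction: the Greenhouse–Geisser procedure scales the ANOVA degrees of freedom by an epsilon estimated from the covariance matrix of the repeated measures. The sketch below (pure Python, illustrative matrices only, not the study's analysis code) shows that estimate; epsilon equals 1 when sphericity holds and drops toward 1/(k − 1) as the violation worsens:

```python
def gg_epsilon(cov):
    """Greenhouse-Geisser epsilon from a symmetric k x k sample covariance
    matrix: double-center the matrix, then
    epsilon = tr(S)^2 / ((k - 1) * sum of squared entries of S)."""
    k = len(cov)
    means = [sum(row) / k for row in cov]          # row means (= column means)
    grand = sum(means) / k                         # grand mean
    s = [[cov[i][j] - means[i] - means[j] + grand  # double-centered matrix
          for j in range(k)] for i in range(k)]
    trace = sum(s[i][i] for i in range(k))
    sum_sq = sum(x * x for row in s for x in row)
    return trace ** 2 / ((k - 1) * sum_sq)

spherical = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
banded = [[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]]
gg_epsilon(spherical)  # 1.0 (sphericity holds)
gg_epsilon(banded)     # 0.8 (degrees of freedom would be scaled by 0.8)
```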

The corrected, repeated measures ANOVA revealed that there was no significant difference between the ANV of a team's ideas throughout the design process (F(1.43, 20.02) = 3.16, p = 0.08, ηp2 = 0.18), see Fig. 5. However, there was a significant difference between the AQL of the team's ideas during the three stages of the design process (F(1.437, 20.120) = 30.68, p < 0.01, ηp2 = 0.69). Specifically, post-hoc tests using the Bonferroni correction revealed that the AQL of a team's final conceptual design (mean = 0.33, SD = 0.25) was significantly higher than both the AQL of their ideas during team informal screening (mean = −0.12, SD = 0.15, p < 0.01, ηp2 = 0.76) and their ideas during team concept selection (mean = −0.07, SD = 0.20, p < 0.01, ηp2 = 0.76), see Fig. 6. These results indicate that the AQL significantly increased during the detailed design stage. There were no other significant differences.
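The Bonferroni adjustment applied in these post-hoc tests simply multiplies each pairwise p-value by the number of comparisons, capping at 1. A sketch with hypothetical p-values for the three pairwise stage comparisons (not the study's values):

```python
def bonferroni(p_values):
    """Bonferroni-adjusted p-values: p_adj = min(1, p * m) for m comparisons."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values for the three pairwise stage comparisons
adjusted = bonferroni([0.001, 0.004, 0.40])  # [0.003, 0.012, 1.0]
```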

Fig. 5
The evolution of the ANV throughout the design process (data over the bars show the means of the variables; bars represent standard errors)
Fig. 6
The evolution of the AQL throughout the design process (data over the bars show the means of the variables; bars represent standard errors)

These results supported the part of our hypothesis that the AQL would increase over the design process, but contradicted the part predicting that the ANV would decrease. The increase in AQL may be a result of students' concentration on the technical feasibility of the system [18,38] and the enriched details in the final conceptual design compared to the early-stage concepts. The lack of increase in the average novelty of design ideas throughout the process may be due to the fact that novelty is rarely stressed in the later phases of design, especially during the concept selection process.

What Impact Does the Concept Selection Tool (Concept Selection Matrix or Tool for Assessing Semantic Creativity) Have on the Evolution of the Quality and Novelty of a Student Team's Ideas?

The first research question showed that the AQL of a student team's ideas changed significantly over the course of their design project, but there was no significant change in the ANV of their ideas. However, it did not compare the impact of different concept selection tools on these changes. Therefore, our second research question was developed in order to identify if, or to what effect, the concept selection tool used during the design process impacted the AQL and ANV of a student team's ideas and final conceptual design. Specifically, we hypothesized that student teams who used the CSM method would see a greater increase in the AQL of their ideas, while student teams who used the TASC method would see a greater increase in the ANV of their ideas throughout the conceptual design process.

In order to address this question, a repeated measures ANOVA was computed with the dependent variables being the ANV and quality (AQL) of a team's idea set and the independent variables being the stage of the design process and the concept selection tool used. The between-groups effect of the concept selection tools from the repeated measures ANOVA revealed no significant difference in the ANV (F(1, 14) = 3.10, p = 0.10, ηp2 = 0.18) or AQL (F(1, 14) = 3.28, p = 0.09, ηp2 = 0.19) of a team's ideas throughout the design process between the TASC and CSM sections, see Figs. 7 and 8. These results did not support our hypotheses: there was no difference in the changes in the average novelty and quality of the design ideas developed throughout the design process based on the concept selection tool used. In other words, this finding indicates that the CSM and TASC methods did not impact the decisions made by student teams differently, even though there are fundamental differences in the way these decision tools rank ideas.

Fig. 7
Comparing ANV throughout the design process in the TASC and CSM sections (data over the bars show means of variables; bars represent standard errors)
Fig. 8
Comparing AQL throughout the design process in the TASC and CSM sections (data over the bars show means of variables; bars represent standard errors)

Do Student Teams Select Ideas Based on the Recommendations of Concept Selection Tools?

While the second research question showed no difference in the ANV or AQL of ideas selected between the CSM and TASC sections throughout the design process, it was unclear if students actually selected the ideas highly recommended by the tools. Thus, the third research question was developed to identify if student teams' selection of ideas aligned with the concept selection tools' recommendations. This is important because if a creative idea is ranked highly by the concept selection tool but not selected by the team, the utility of the tool is minimal. Based on prior research, we hypothesized that students would be more likely to select ideas ranked highly in CSM because they trusted the CSM method more than the TASC method. Therefore, in this research question, principles of directed content analysis [81] were used to analyze student teams' responses to postsurvey questions. It is important to note that, due to illegible responses, only seven teams from the CSM section were included in the analysis of the first question and only seven teams from the TASC section were included in the analysis of the second question.

The first question on the postsurvey asked student teams which idea(s) their team thought should be further developed after using the concept selection activity and if these ideas were ranked highly in the concept selection tool. The results revealed that 5 out of 8 (62.5%) teams from the TASC section and 6 out of 7 (85.7%) teams from the CSM section selected the ideas ranked first by the corresponding concept selection tool; see Fig. 9 for a full exploration of the ideas selected by each team. Importantly, all of the student teams in the CSM section selected ideas ranked in the top 3. In contrast, student teams in the TASC section selected ideas ranked anywhere from first to fifth. This suggests that student teams were less likely to use the recommendations of TASC than those of CSM. However, in both sections, student teams' likelihood of selecting an idea decreased as its ranking decreased, indicating that students were more likely to select ideas ranked highly by these tools.

Fig. 9
Percentages of student teams that selected ideas with different rankings by concept selection tools

When asked if the ideas their team chose to further develop were ranked highly in the formal concept selection tool and, if not, why their team selected them, student teams from the CSM section mentioned that they chose the ideas ranked first by the CSM method because the ideas met their self-created criteria. For instance, one of the teams from the CSM section wrote, "The ideas we chose further to develop did rank highly in the formal concept selection method. This is important as it agrees with the factors we decided" [emphasis added]. Similarly, student teams who chose the ideas ranked first in the TASC method wrote that "…we chose them because they met our ideal designs and needs" and "The formal concept selection agreed with our reasoning as to which concepts were the most effective when implemented in our situation." Meanwhile, student teams who did not select ideas ranked first by the TASC method stated that they did not agree with some of its recommendations: "… the ideas that were highly ranked according to the TASC website were not as viable as the idea we ended up choosing." In addition, 6 out of 8 (75%) teams from the CSM section and 6 out of 7 (85.7%) teams from the TASC section emphasized different aspects of customer needs, such as reducing smog, EPA requirements, and cost. Additionally, 3 out of 8 (37.5%) teams from the CSM section and 4 out of 7 (57.1%) teams from the TASC section mentioned technical feasibility. Interestingly, none of the teams from the CSM section mentioned "creativity," while 3 out of 7 (42.9%) teams from the TASC section emphasized that they specifically considered creativity during their concept selection.

The results of this research question confirmed our hypothesis that students would be more likely to select ideas ranked highly in CSM than those in TASC. One potential reason for this difference may be that the transparency of the ratings in the CSM method allowed student teams to manipulate the relative priorities of the criteria and subjectively rate the ideas so that the evaluation results were more consistent with their expectations [25]. In contrast, the weights of the adjective words in the TASC method were hidden from students [32].

Discussion

These results lead us back to the purpose of this study, which was to understand the impact of traditional, and newly developed concept selection tools on the evolution of creative ideas and the decision-making processes of engineering students. Specifically, the main findings were

  • The AQL of the ideas was significantly improved from the team informal screening stage to the final conceptual design stage. However, there was no change in the ANV of the ideas.

  • There were no significant differences between the impact of CSM and TASC methods on the ANV or quality (AQL) of student team ideas throughout the design process.

  • Student teams were more likely to select ideas ranked highly in the CSM method over the newly developed TASC method.

  • Student teams were more likely to select ideas after using concept selection tools if these ideas matched their preconceived expectations.

Specifically, our results showed that the AQL of student teams' ideas increased from team informal screening to the final conceptual design, but there was no change in ANV. This reflects the fact that students often emphasize customer needs and technical feasibility throughout the design process [18,78], both of which are constructs of the quality measurement. It shows an overall improvement in the quality of ideas as a student team tackles a design challenge, which is preferred due to the significant influence of idea quality on later design stages [5]. However, there was no significant change in the ANV of the student teams' ideas. This indicates that student teams placed less value on finding new ways of solving the problem later in the design process. Based on a previous analysis of individual surveys, it was concluded that this failure to improve the novelty of ideas was not due to the desire to get a good grade, ownership of the ideas, ease of prototyping, or the instructor's preferences [78]. Instead, it might be due to students' preference for customer needs, design criteria, and technical feasibility over the novelty of ideas and the recommendations of concept selection tools [78]. This also aligns with previous research that found students' subjective ratings of novelty and feasibility to be inversely related [49], which means that the move toward more feasible design alternatives in the later stages of the design process may come at the cost of design originality. The current study examined the impact of formal concept selection tools in the conceptual design process as a complement to a previous study that tracked the changes in the best novelty and quality of ideas generated and selected by students during a class project without the aid of formal concept selection tools [19].
It was found that even though concept selection tools were not a reason students abandoned creativity, the benefits of these tools were not fully realized by the students either. This finding contributes to identifying the factors that lead to a decrease in idea novelty throughout the conceptual design process and suggests a shift in how the later stages of the design process are taught, which are often geared toward evaluating, selecting, and synthesizing the original ideas into a final solution for further development [3,4]. It also urges design educators to integrate training that focuses not only on developing feasible design alternatives, but also on continuing to develop novel alternatives throughout the design process [13].

The second finding showed that there were no significant differences between the impact of the CSM and TASC methods on the ANV or quality (AQL) of student team ideas throughout the design process, despite the fundamental differences between these tools. This is surprising since the CSM method was designed to consider customer needs [25] and not necessarily to evaluate the novelty of a solution [9,29,30,82]. In contrast, the TASC method utilizes semantic creativity ratings that lead to evaluations of idea creativity on a global scale [32]. While these tools are fundamentally different, the lack of differences in their impact on the average novelty and quality of ideas throughout the design process may be due in part to the fact that student teams were less likely to use the recommendations from the TASC method compared to the CSM method, as reported in the third research finding. In other words, even if the TASC method recommended ideas that were more novel and more feasible than those recommended by the CSM method, students would not necessarily take these recommendations, which would minimize the impact of these kinds of decision tools. A potential reason that student teams in the CSM section were more likely to select ideas ranked highly by the CSM method may be that they had used this approach in a previous project. In contrast, students in the TASC section had never utilized this approach before, and the method was neither familiar nor transparent to them, which may have impacted their likelihood of selecting ideas ranked highly by this approach. Future work is needed to explore exactly why students are biased for or against these approaches.

In addition, the results showed that student teams were more likely to select ideas ranked highly in the concept selection tools when these ideas matched their expectations of which ideas would best meet their predefined criteria. In other words, regardless of the recommendations the decision tool provided, students in the current study were likely to exhibit confirmation bias during the concept selection process, in which they only looked for evidence (i.e., ratings) that supported their beliefs. This type of bias can cause a fixation on initial ideas and block students from identifying other, perhaps better, design alternatives [72]. This confirmation bias may also have been a reason that students were more likely to select alternatives ranked highly in the CSM method, as the CSM method has a highly transparent rating process [55], which allows students to easily manipulate the criteria, weights, and evaluation results to get the answer they want. The fact that student teams only selected ideas ranked highly in concept selection tools when these ideas met their expectations indicates that students might not fully understand the benefits of using formal concept selection tools. This means that the judgments students make on which ideas to consider, and thus put into the concept selection tool, can bias decision making very early in the design process. While this finding encourages educators to be more thoughtful in how concept selection tools and informal decision-making processes are taught and introduced in the engineering design classroom, more work is needed both to identify the impact of decision biases in this process and to develop tools and methods that enhance the flow of creative ideas.

Conclusions and Future Work

The current study was developed to understand how the utilization of concept selection tools impacted the selection of creative ideas during the conceptual design process in engineering education through an experimental evaluation of 60 first-year students. The results of this study indicated that the AQL of student teams' design ideas increased significantly over the course of an 8-week design process, while there was no change in the ANV of their ideas, and that the evolution of the AQL and ANV of ideas in the conceptual design process was not significantly impacted by the concept selection tool used. In addition, students showed a confirmation bias in that they would select ideas recommended by concept selection tools only when the highly ranked ideas met their expectations. These findings indicate that there are other factors impacting student decision making and the development of creative ideas during this process. These results call for engineering design educators to emphasize not only the development of feasible design alternatives in the later stages of design, but also the importance of original solutions in an effort to drive creative idea development. In addition, the phenomenon that student teams would only select ideas ranked highly in concept selection tools when these ideas met their expectations indicates that students may not fully understand the benefits of using formal concept selection tools. This calls for changes in engineering design courses in order to emphasize not only how to use design tools, but also why to use these tools. It also requires more research into the cognitive biases associated with student team decision making during the concept selection process, and the modification or development of design tools that mitigate these biases in an effort to promote the flow of creative ideas.

While this study provides insights into the use of concept selection tools in engineering design education, some limitations still exist. First, the design problem used in the current study was a transportation or systems design problem, which may have influenced the creativity of the ideas developed [19]. Since the study was embedded in an engineering design course, the design of the study was limited by the curriculum design, course requirements, and the time frame of the design process (8 weeks). Because the informal screening of concepts happens early in the design process, it is possible that student teams abandoned their most novel ideas early on, as has been found in previous studies [19]. In this way, the current study, along with this prior work, suggests a need to explore teaching methods that support novel idea development in engineering education in order to better maintain creative potential during the design process.

Furthermore, student teams' selections might have been skewed by their previous experience with the CSM or TASC methods or by the transparency of the ranking systems. In addition, only two fundamentally different concept selection tools were used in the current study. Future work should expand this work to include explorations of a wider variety of decision aids in different design scenarios, including a control group in which students do not use any formal concept selection tools. That being said, the current work strongly points to the need for further investigations into the use of formal and informal decision processes in engineering design education and their ultimate impact on idea development. On top of that, the current study does not provide exact answers to why students do not use the recommendations of concept selection tools or how the concept selection process should be taught in engineering design classrooms. Further exploration is needed to answer these questions through both behavioral and attitudinal studies. Importantly, the current study, conducted in engineering design classrooms, also serves as a first step in investigating the impact of concept selection tools. Future studies are needed to identify if these same challenges exist in industry. These studies are of particular interest because they would allow for the identification of the long-term impact of concept selection tools on engineered solutions, where the time frame of the design process does not have to be limited to an educational semester.

Acknowledgment

We would like to thank our undergraduate research assistant Lisa Miele, and our participants for their help in this project.

Funding Data

  • National Science Foundation, Division of Civil, Mechanical and Manufacturing Innovation (Grant No. 1351493).

Appendix: Design Problem Statement

Project Objective.

Pittsadelphia is looking for the design of a cost-effective freight shipping system that reduces smog and meets EPA requirements, while maintaining or increasing freight capacity into and out of this important port city.

Project Background.

Approximately 165,000 tons of freight or minerals (coal, etc.) travel via rail into and out of the port city of Pittsadelphia every day. Smog from locomotive emissions is a key complaint of city residents. Smog is generated from engine-emitted NOx. Tier 2 locomotives used to haul freight are approaching the age for overhaul, at which time investments will be required to meet EPA tier 3 (or higher) requirements.

Suggestions have been made to address locomotive emissions (i.e., smog) by

  1. Upgrade the locomotive fleet to meet more recent emissions guidelines set by the EPA. A few options may exist to meet the new guidelines:

    • Sell existing fleet and purchase new locomotives

    • Upgrade fleet with exhaust after-treatment hardware

    • Utilize alternate fuels (Biodiesel, CNG, LNG, etc.), which may produce less NOx

  2. Alternate freight shipping methods:

    • By water

    • By air

    • By ground, i.e., trucking

Sponsor Background.

GE Transportation, a unit of GE (NYSE: GE), solves the world's toughest transportation challenges. GE Transportation builds equipment that moves the rail, mining, and marine industries. GE's fuel-efficient and lower-emissions freight and passenger locomotives; diesel engines for rail; marine and stationary power applications; signaling and software solutions; drive systems for mining trucks; and value-added services help customers grow. GE Transportation is headquartered in Chicago, IL, and employs approximately 13,000 employees worldwide.

Project Description.

Each design team should research and evaluate the suggestions made for fleet upgrade or alternate shipping methods. For upgrades, consider physical constraints of new hardware, as well as fuel storage requirements. Provide your recommendations, commenting on impact to:

  1. Emissions/regulatory requirements

  2. Costs: fuel, infrastructure, etc.

  3. Freight throughput/capacity

  4. Public opinion

  5. On-time delivery

Project Deliverables.

Note: Your instructor will clarify her or his expectations for these deliverables and respective due dates.

  • Technical report containing the following elements

  • Rationale for the recommendation

  • Description of alternative concepts and their evaluation

  • Systems diagram

  • Concept of operations

  • Environmental analysis

  • Assessment of important aspects of your system for feasibility and adoption, including public opinion

  • Economic viability of the system

  • CAD drawings

  • Model or prototype of a component of the overall system

References

1.
Mattson
,
C. A.
, and
Messac
,
A.
,
2005
, “
Pareto Frontier Based Concept Selection Under Uncertainty, With Visualization
,”
Optim. Eng.
,
6
(
1
), pp.
85
115
.
2.
Pahl
,
G.
,
Beitz
,
W.
,
Feldhusen
,
J.
, and
Grote
,
K.
,
2007
,
Engineering Design: A Systematic Approach
, 3rd ed.,
Springer Science+Business Media Deutschland GmbH
,
Berlin
, p.
632
.
3.
Nikander
,
J. B.
,
Liikkanen
,
L. A.
, and
Laakso
,
M.
,
2014
, “
The Preference Effect in Design Concept Evaluation
,”
Des. Stud.
,
35
(
5
), pp.
473
499
.
4.
Ulrich
,
K. T.
,
Eppinger
,
S. D.
, and
Goyal
,
A.
,
2011
,
Product Design and Development
,
McGraw-Hill
,
New York
.
5.
Huang
,
H.-Z.
,
Liu
,
Y.
,
Li
,
Y.
,
Xue
,
L.
, and
Wang
,
Z.
,
2013
, “
New Evaluation Methods for Conceptual Design Selection Using Computational Intelligence Techniques
,”
J. Mech. Sci. Technol.
,
27
(
3
), pp.
733
746
.
6.
Amabile
,
T. M.
,
1983
, “
The Social Psychology of Creativity: A Componential Conceptualization
,”
J. Pers. Soc. Psychol.
,
45
(
2
), pp.
357
376
.
7.
Brown
,
R. T.
,
1989
, “
Creativity
,”
Handbook of Creativity
,
Springer
, Boston, MA, pp.
3
32
.
8.
Mayer
,
R. E.
,
1999
, “
Fifty Years of Creativity Research
,”
Handbook of Creativity
, The Press Syndicate of the University of Cambridge, Cambridge, UK, p.
449
.
9.
Shah
,
J. J.
,
Vargas-Hernandez
,
N.
, and
Smith
,
S. M.
,
2003
, “
Metrics for Measuring Ideation Effectiveness
,”
Des. Stud.
,
24
(
2
), pp.
111
134
.
10.
Goel
,
P. S.
, and
Singh
,
N.
,
1998
, “
Creativity and Innovation in Durable Product Development
,”
Comput. Ind. Eng.
,
35
(
1–2
), pp.
5
8
.
11.
Dym
,
C. L.
,
Agogino
,
A. M.
,
Eris
,
O.
,
Frey
,
D. D.
, and
Leifer
,
L. J.
,
2005
, “
Engineering Design Thinking, Teaching, and Learning
,”
J. Eng. Educ.
,
94
(
1
), pp.
103
120
.
12.
Daly
,
S. R.
,
Mosyjowski
,
E. A.
, and
Seifert
,
C. M.
,
2014
, “
Teaching Creativity in Engineering Courses
,”
J. Eng. Educ.
,
103
(
3
), pp.
417
449
.
13.
Rietzschel
,
E. F.
,
Nijstad
,
B. A.
, and
Stroebe
,
W.
,
2006
, “
Productivity is Not Enough: A Comparison of Interactive and Nominal Groups in Idea Generation and Selection
,”
J. Exp. Soc. Psychol.
,
42
(
2
), pp.
244
251
.
14.
Starkey
,
E.
,
Gosnell
,
C. A.
, and
Miller
,
S. R.
,
2015
, “Implementing Creativity Evaluation Tools Into the Concept Selection Process in Engineering Education,”
ASME
Paper No. DETC2015-47396.
15.
Gosnell
,
C. A.
, and
Miller
,
S. R.
,
2016
, “
But is it Creative? Delineating the Impact of Expertise and Concept Ratings on Creative Concept Selection
,”
ASME J. Mech. Des.
,
138
(
2
), p.
021101
.
16.
Toh
,
C.
, and
Miller
,
S.
,
2014
, “The Role of Individual Risk Attitudes on the Selection of Creative Concepts in Engineering Design,”
ASME
Paper No. DETC2014-35106.
17.
Toh
,
C.
,
Patel
,
A.
,
Strohmetz
,
A.
, and
Miller
,
S.
,
2015
, “My Idea is Best! Ownership Bias and Its Influence in Engineering Concept Selection,”
ASME
Paper No. DETC2015-46478.
18.
Toh
,
C. A.
, and
Miller
,
S. R.
,
2015
, “
How Engineering Teams Select Design Concepts: A View Through the Lens of Creativity
,”
Des. Stud.
,
38
, pp.
111
138
.
19.
Starkey
,
E.
,
Toh
,
C. A.
, and
Miller
,
S. R.
,
2016
, “
Abandoning Creativity: The Evolution of Creative Ideas in Engineering Design Course Projects
,”
Des. Stud.
,
47
, pp.
47
72
.
20.
Sarbacker
,
S. D.
, and
Ishii
,
K.
,
1997
, “
A Framework for Evaluating Risk in Innovative Product Development
,”
Design Engineering Technical Conferences, Sacramento
, CA, Sept. 14–17, pp.
1
10
.
21.
Ogot
,
M.
, and
Okudan-Kremer
,
G. E.
,
2006
,
Engineering Design: A Practical Guide
,
Trafford Publishing
,
Bloomington, IN
.
22.
Pugh
,
S.
,
1991
,
Total Design: Integrated Methods for Successful Product Engineering
,
Addison-Wesley
,
Workingham, UK
.
23.
Akao
,
Y.
,
1994
, “
Development History of Quality Function Deployment
,”
The Customer Driven Approach to Quality Planning and Deployment
,
Asian Productivity Organization
,
Minato, Japan
, p.
339
.
24.
Saaty
,
T. L.
,
1994
, “
How to Make a Decision: The Analytic Hierarchy Process
,”
Interfaces
,
24
(
6
), pp.
19
43
.
25.
Marsh
,
E. R.
,
Slocum
,
A. H.
, and
Otto
,
K. N.
,
1993
,
Hierarchical Decision Making in Machine Design
,
MIT Precision Engineering Research Center
, Cambridge, MA.
26.
Racheva
,
Z.
,
Daneva
,
M.
, and
Buglione
,
L.
,
2008
, “
Supporting the Dynamic Reprioritization of Requirements in Agile Development of Software Products
,”
Second International Workshop on Software Product Management
(
IWSPM'08
), Barcelona, Spain, Sept. 9, pp.
49
58
.
27.
Hurst
,
K.
,
1999
,
Engineering Design Principles
,
Butterworth-Heinemann
, New York.
28.
Nickerson
,
R. S.
,
1998
, “
Confirmation Bias: A Ubiquitous Phenomenon in Many Guises
,”
Rev. Gen. Psychol.
,
2
(
2
), p.
175
.
29. Martin, M. W., 2006, “Moral Creativity in Science and Engineering,” Sci. Eng. Ethics, 12(3), pp. 421–433.
30. Fischer, G., 2013, “Learning, Social Creativity, and Cultures of Participation,” Learning and Collective Creativity: Activity-Theoretical and Sociocultural Studies, Routledge, New York, p. 198.
31. Harms, R., and Van der Zee, K., 2013, “Interview: Paul Paulus Group Creativity,” Creativity Innovation Manage., 22(1), pp. 96–99.
32. Gosnell, C. A., and Miller, S. R., 2014, “A Novel Method for Providing Global Assessments of Design Concepts Using Single-Word Adjectives and Semantic Similarity,” ASME Paper No. DETC2014-35380.
33. Triantaphyllou, E., and Mann, S. H., 1995, “Using the Analytic Hierarchy Process for Decision Making in Engineering Applications: Some Challenges,” Int. J. Ind. Eng.: Appl. Pract., 2(1), pp. 35–44. https://www.researchgate.net/publication/241416054_Using_the_analytic_hierarchy_process_for_decision_making_in_engineering_applications_Some_challenges
34. Parnes, S. J., and Meadow, A., 1959, “Effects of ‘Brainstorming’ Instructions on Creative Problem Solving by Trained and Untrained Subjects,” J. Educ. Psychol., 50(4), p. 171.
35. Dennis, A. R., and Valacich, J. S., 1993, “Computer Brainstorms: More Heads are Better Than One,” J. Appl. Psychol., 78(4), p. 531.
36. Osborn, A., 1957, Applied Imagination, Scribner, New York.
37. Shah, J., 1993, “Method 5-1-4 G-a Variation on Method 635,” MAE 540 Class Notes, Arizona State University, Tempe, AZ.
38. Kazerounian, K., and Foley, S., 2007, “Barriers to Creativity in Engineering Education: A Study of Instructors and Students Perceptions,” ASME J. Mech. Des., 129(7), pp. 761–768.
39. Rietzschel, E., Nijstad, B. A., and Stroebe, W., 2010, “The Selection of Creative Ideas After Individual Idea Generation: Choosing Between Creativity and Impact,” Br. J. Psychol., 101(1), pp. 47–68.
40. Wilde, D. J., 1997, “Using Student Preferences to Guide Design Team Composition,” Design Engineering Technical Conferences (DETC), Sacramento, CA, Sept. 14–17, pp. 1–6. http://web.engr.oregonstate.edu/~paasch/classes/me382/3890.PDF
41. Toh, C. A., and Miller, S. R., 2016, “Creativity in Design Teams: The Influence of Personality Traits and Risk Attitudes on Creative Concept Selection,” Res. Eng. Des., 27(1), pp. 73–89.
42. Toh, C. A., and Miller, S. R., 2016, “Choosing Creativity: The Role of Individual Risk and Ambiguity Aversion on Creative Concept Selection in Engineering Design,” Res. Eng. Des., 27(3), pp. 195–219.
43. Jansson, D., and Smith, S., 1991, “Design Fixation,” Des. Stud., 12(1), pp. 3–11.
44. Toh, C. A., Miller, S. R., and Kremer, G. E., 2013, “The Role of Personality and Team-Based Product Dissection on Fixation Effects,” Adv. Eng. Educ., 3(4), pp. 1–23. https://files.eric.ed.gov/fulltext/EJ1076107.pdf
45. Toh, C. A., Miller, S. R., and Kremer, G. E., 2012, “Mitigating Design Fixation Effects in Engineering Design Through Product Dissection Activities,” Design Computing and Cognition, College Station, TX, June 7–9, pp. 95–113.
46. Toh, C., Strohmetz, A., and Miller, S., 2016, “The Effects of Gender and Idea Goodness on Ownership Bias in Engineering Design Education,” ASME J. Mech. Des., 138(10), p. 101105.
47. Mueller, J. S., Melwani, S., and Goncalo, J. A., 2012, “The Bias Against Creativity: Why People Desire but Reject Creative Ideas,” Psychol. Sci., 23(1), pp. 13–17.
48. Rubenson, D. L., and Runco, M. A., 1995, “The Psychoeconomic View of Creative Work in Groups and Organizations,” Creativity Innovation Manage., 4(4), pp. 232–241.
49. Runco, M. A., and Charles, R. E., 1993, “Judgements of Originality and Appropriateness as Predictors of Creativity,” Pers. Individ. Differ., 15(5), pp. 537–546.
50. Linnerud, B., and Mocko, G., 2013, “Factors That Effect Motivation and Performance on Innovative Design Projects,” ASME Paper No. DETC2013-12758.
51. Saaty, T. L., 1990, “How to Make a Decision: The Analytic Hierarchy Process,” Eur. J. Oper. Res., 48(1), pp. 9–26.
52. Ulrich, K. T., and Eppinger, S., 2007, Product Design and Development, McGraw-Hill, New York.
53. Saaty, T. L., 1980, The Analytic Hierarchy Process, McGraw-Hill, New York.
54. Ogot, M., and Okudan-Kremer, G., 2006, Engineering Design: A Practical Guide, Trafford Publishing, St. Victoria, BC, Canada.
56. Lai, V. S., Wong, B. K., and Cheung, W., 2002, “Group Decision Making in a Multiple Criteria Environment: A Case Using the AHP in Software Selection,” Eur. J. Oper. Res., 137(1), pp. 134–144.
57. Korpela, J., and Tuominen, M., 1997, “Group Decision Support for Analysing Logistics Development Projects,” 30th Hawaii International Conference on System Sciences, Wailea, HI, Jan. 7–11, pp. 493–502.
58. Dyer, R. F., and Forman, E. H., 1992, “Group Decision Support With the Analytic Hierarchy Process,” Decis. Support Syst., 8(2), pp. 99–124.
59. Tam, M. C., and Tummala, V. R., 2001, “An Application of the AHP in Vendor Selection of a Telecommunications System,” Omega, 29(2), pp. 171–182.
60. Levine, J. M., 1989, “Reaction to Opinion Deviance in Small Groups,” Psychology of Group Influence, 2nd ed., Psychology Press, New York, pp. 187–231.
61. Naquin, C. E., and Tynan, R. O., 2003, “The Team Halo Effect: Why Teams are Not Blamed for Their Failures,” J. Appl. Psychol., 88(2), p. 332.
62. Larey, T. S., and Paulus, P. B., 1999, “Group Preference and Convergent Tendencies in Small Groups: A Content Analysis of Group Brainstorming Performance,” Creativity Res. J., 12(3), pp. 175–184.
63. Salonen, M., and Perttula, M., 2005, “Utilization of Concept Selection Methods: A Survey of Finnish Industry,” ASME Paper No. DETC2005-85047.
64. Katsikopoulos, K. V., 2012, “Decision Methods for Design: Insights From Psychology,” ASME J. Mech. Des., 134(8), p. 084504.
65. Whitaker, R., 2007, “Criticisms of the Analytic Hierarchy Process: Why They Often Make No Sense,” Math. Comput. Model., 46(7–8), pp. 948–961.
66. Belton, V., and Gear, T., 1985, “The Legitimacy of Rank Reversal—A Comment,” Omega, 13(3), pp. 143–144.
67. Dyer, J. S., 1990, “Remarks on the Analytic Hierarchy Process,” Manage. Sci., 36(3), pp. 249–258.
68. Pérez, J., Jimeno, J. L., and Mokotoff, E., 2006, “Another Potential Shortcoming of AHP,” Top, 14(1), pp. 99–111.
69. Surowiecki, J., 2005, The Wisdom of Crowds, Anchor, New York.
70. Kerr, N. L., and Tindale, R. S., 2011, “Group-Based Forecasting?: A Social Psychological Analysis,” Int. J. Forecasting, 27(1), pp. 14–40.
71. Amabile, T., 1982, “Social Psychology of Creativity: A Consensual Assessment Technique,” J. Pers. Soc. Psychol., 43(5), pp. 997–1013.
72. Hallihan, G. M., Cheong, H., and Shu, L., 2012, “Confirmation and Cognitive Bias in Design Cognition,” ASME Paper No. DETC2012-71258.
73. Gill, H., 1990, “Adoption of Design Science by Industry—Why so Slow?,” J. Eng. Des., 1(3), pp. 289–295.
74. Maurer, C., and Widmann, J., 2012, “Conceptual Design Theory in Education Versus Practice in Industry: A Comparison Between Germany and the United States,” ASME Paper No. DETC2012-70079.
75. Toh, C., Miele, L., and Miller, S., 2015, “Which One Should I Pick? Concept Selection in Engineering Design Industry,” ASME Paper No. DETC2015-46522.
76. Jonas, E., Schulz-Hardt, S., Frey, D., and Thelen, N., 2001, “Confirmation Bias in Sequential Information Search After Preliminary Decisions: An Expansion of Dissonance Theoretical Research on Selective Exposure to Information,” J. Pers. Soc. Psychol., 80(4), p. 557.
77. Ask, K., and Granhag, P. A., 2005, “Motivational Sources of Confirmation Bias in Criminal Investigations: The Need for Cognitive Closure,” J. Invest. Psychol. Offender Profiling, 2(1), pp. 43–63.
78. Zheng, X., and Miller, S. R., 2016, “How Do I Choose? The Influence of Concept Selection Methods on Student Team Decision-Making,” ASME Paper No. DETC2016-60333.
79. Osborn, A. F., Rona, G., Dupont, P., and Armand, L., 1971, The Constructive Imagination: How to Take Advantage of Its Ideas, Principles and Process of the Creative Thought and Brainstorming, Dunod, Paris, France.
80. Linsey, J. S., Clauss, E. F., Kurtoglu, T., Murphy, J. T., Wood, K. L., and Markman, A. B., 2011, “An Experimental Study of Group Idea Generation Techniques: Understanding the Roles of Idea Representation and Viewing Methods,” ASME J. Mech. Des., 133(3), p. 031008.
81. Carley, K., 1990, “Content Analysis,” The Encyclopedia of Language and Linguistics, R. E. Asher, ed., Pergamon Press, Edinburgh, UK.
82. Genco, N., Holtta-Otto, K., and Seepersad, C. C., 2012, “An Experimental Investigation of the Innovation Capabilities of Undergraduate Engineering Students,” J. Eng. Educ., 101(1), pp. 60–81.