Abstract

Prior research has demonstrated how the average characteristics of a team impact team performance. The relative contribution of team members has been largely ignored, especially in the context of engineering design. In this work, a behavioral study was conducted with 78 participants to uncover whether the most or least proficient member of a configuration design team had a larger impact on overall performance. Proficiency is an individual's ability to deal with a specific range of problems. It was found that a configuration design team is most dependent on the proficiency of its most proficient member. The most proficient member had a significant positive effect on how quickly the team reached performance thresholds, and the other members of the team were not found to have the same positive impact throughout the design study. Behavioral heuristics were found using hidden Markov modeling to capture the differences in behavior and design strategy between members of different proficiency. Results show that high proficiency and low proficiency team members exhibit different behavior: the most proficient member's behavior leads to topologically simpler designs, and the other members adopt those designs, so the most proficient member drives the team design and thus the team performance. These results underscore the value of the relative contribution model in constructing engineering teams by demonstrating that different team members had unequal effects on team performance. It is shown that enhancing the most proficient member of a team is more likely to contribute to increased team performance than enhancing the least proficient member.

1 Introduction

Teams are the foundational building blocks to modern organizations [1], and engineering design, even in its simplest form, requires a diverse understanding of engineering principles and multidisciplinary knowledge [2]. Although teams are ubiquitous in engineering design, many questions remain about how to best construct one or what the composition of the team should be [3,4].

It has been well established, and is perhaps self-evident, that the products of a team will be better with more capable members on the team [5]. Many studies, meta-analyses, and reviews have explored the average characteristics of a team [6–15]. For example, when specific characteristics of a team are averaged, expertise [6], personality traits [13], emotional intelligence [13], cognitive ability [14], and social sensitivity [15], among others, have been shown to have a positive effect on team performance. When assessing the average characteristics of a team, called the average contribution model, a key assumption is that all of the members affect the team equally. Alternatively, the relative contribution model asserts that overall team performance depends more on the characteristics of certain members [16]; however, this has been studied far less. The relative contribution model is primarily based on the work of Steiner, who developed a typology of team tasks. In his classes of tasks, he noted how individual members would affect the outcome of the task. Of note are disjunctive tasks, where the team can only accept one individual contribution and therefore the best member is most influential; conjunctive tasks, where the rules dictate that the minimum effort is the criterion for success and therefore the worst member is most influential; and discretionary tasks, where the team can combine individual contributions in any manner they wish, making the influence of members variable based on member choices during the task process [17].

With this relative composition framework, studies have shown that the highest and lowest cognitive ability on a team correlated positively with team performance [13,18], that a weak member in terms of goal orientation will drag down team performance [19], that a team with a member with low teamwork ability inhibits team process [20], and that a team's minimums and maximums on a variety of personality factors were correlated with behavior that promotes efficient and effective functioning [21]. As is clear from the previous research, different characteristics offer different perspectives on whether the weakest or strongest member of a team will have a larger effect on team performance. This study examines the characteristic of task proficiency and tests whether the team with an enhanced most proficient member or the team with an enhanced least proficient member is more likely to succeed.

Proficiency is an individual's ability to deal with a specific range of problems [22] and can simply be understood as an individual's ability on the problem at hand. Task proficiency has been shown to have a significant effect on team performance when exploring the average characteristics of a team [10], making its use appropriate for this study, especially in the context of engineering design, where engineering-specific knowledge is required.

Kozlowski and Bell note that “teams do not behave, individuals do; but, they do so in ways that create team-level phenomena” [23]. This insight motivates the question of what behaviors of individual team members drive team performance. The behavior of individuals with different levels of proficiency will yield insight into why the strongest or weakest member of the team has more impact.

The effect of differences in task proficiency on designer behavior has not been explicitly explored; however, it is possible to draw comparisons to research on expert versus novice designers. Expert designers, who can be seen as having high proficiency, have been found to move to a problem solution faster than novice designers, who have low proficiency. Low proficiency designers have been found to transition less between design steps and spend less time in the final steps of the design process [24,25]. It is not known whether these individual behaviors translate to team design settings. Other individual team member behaviors, such as social loafing, or a decrease in effort due to working in a team [26], could be correlated with different levels of proficiency. Social loafing has been found to play varying roles in different types of tasks in Steiner's typology [27].

This work seeks to inform which member of an engineering design team drives performance. With the insight from Kozlowski and Bell, it is also desired to discover what the differences in behavior between the strongest and weakest members are, and what those behavioral differences can tell us about why a member drives team performance.

2 Methods

2.1 Overview.

The primary purpose of this behavioral study was to determine whether the most proficient or least proficient member on a configuration design team had a larger impact on team performance. To test this, participants were first asked to design alone to test their task proficiency and then strategically placed in teams of three to design collectively.

2.2 Task Description.

One common type of engineering design problem is configuration design. A configuration design task is one where the artifact being designed is constructed from a set of predefined components and obeys a set of constraints [28,29]. One type of configuration design problem, truss design, is used in this study as a familiar yet nontrivial design problem. Participants were asked to use members and nodes to design a truss which meets design constraints. Structural configuration design, such as truss design, can be broken down into three classes of actions: topology operations (adding and removing members and nodes), shape operations (manipulating the location of existing nodes), and sizing operations (manipulating the size of existing members) [30,31]. Users were able to do all three types of operations in this study. Previous studies have demonstrated that truss design is a representative configuration design task and that results found with truss design translate to other configuration design problems [31,32].

2.3 Participants.

The study was conducted twice with two populations of mechanical engineering students. The first experiment was conducted with an undergraduate stress analysis course at Carnegie Mellon University (CMU) consisting of 46 sophomore and junior mechanical engineering students. The second experiment was conducted with students from a more advanced mechanical engineering design course at CMU with 32 senior and first-year master's students. The study required participants with varying levels of proficiency in truss design, which both populations offered, making their use appropriate. This resulted in 23 teams for analysis, 12 from the first population and 11 from the second population. Section 5.1 shows that there is no significant difference in performance between the two populations.

2.4 Materials.

A graphical user interface (GUI) was used for this study, which is shown in Fig. 1. This program was written in MATLAB and was adapted from a program written for other studies of teams via truss construction [33,34]. The program allows participants to design a truss given starting nodes and static loads. It allows for a variety of actions, which are listed in Table 1. The table also includes the action class of structural configuration design for each action. The GUI gave feedback on the truss being designed. The mass of the truss is shown throughout the design session. When the truss is stable, the lowest factor of safety (FOS) on any member is shown. The FOS is the ratio of the relevant strength of the material to the relevant stress or force being applied to the member [35]. For this problem, the relevant strength is the tensile strength if the member is in tension, or the critical buckling force if the member is in compression. In addition to the lowest FOS being displayed, all members are color-coded by their FOS, which was explained to participants. At no point could the participants see the scoring metric that was used to evaluate the truss, only the FOS and the mass. During the team sessions, the GUI allowed users to share designs freely and, if desired, select a teammate's shared design to work on. The shared designs include an image of the design along with the relevant parameters (mass and FOS).
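As an illustration of the FOS feedback described above, a member-level check might be sketched as follows. This is a minimal sketch under stated assumptions (an axial-stress tension check and a pinned-pinned Euler buckling model for compression); the function name and parameters are illustrative and are not taken from the study's GUI code.

% Minimal sketch of a member-level factor of safety check (illustrative only,
% not the study's GUI code). Tension is checked against the tensile strength;
% compression is checked against the Euler critical buckling load.
function fos = member_fos(axial_force, area, I, L, E, sigma_tensile)
    if axial_force >= 0                        % member in tension
        fos = sigma_tensile / (axial_force / area);
    else                                       % member in compression -> buckling
        P_crit = pi^2 * E * I / L^2;           % pinned-pinned Euler buckling load
        fos = P_crit / abs(axial_force);
    end
end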

Fig. 1
User view of the GUI during the study
Table 1
Actions and action descriptions for the study

Action name             | Action description                                                                                                          | Action class
1. Add node             | Add a single node to the design space                                                                                        | Topology
2. Add member           | Add a member between two nodes in the design space                                                                           | Topology
3. Delete node          | Delete a single node from the design space and all members attached to it                                                    | Topology
4. Delete member        | Delete a single member from the design space                                                                                 | Topology
5. Resize all members   | Resize (+/−) all members in the design space by 1 unit (members at the maximum/minimum size do not change beyond limits)     | Sizing
6. Resize single member | Resize (+/−) a single member in the design space by 1 unit (members at the maximum/minimum size do not change beyond limits) | Sizing
7. Move node            | Move a single node in the design space along with all members attached to it                                                 | Shape
8. Delete all           | Delete all user-added nodes and members from the design space                                                                | Topology
9. Switch designs       | Switch design to a shared design (this action was only available during the team portion of the study)                       | —

2.5 Procedure.

The experiment was approximately one hour long. Figure 2 shows the approximate timeline of the study. To begin, participants were introduced to the GUI through an interactive, guided tutorial. During the tutorial, all actions were demonstrated, and the design objective was explained. The objective was stated as follows: “Design a truss to meet the goal factor of safety = 1.25, with as low of a mass as possible.”

Fig. 2
Timeline of study. The numbers above each item show the minutes in each section.

After the tutorial, participants were given an individual assessment to evaluate their proficiency in truss design. The assessment consisted of three design sessions, in which participants were asked to design a truss for each increasingly difficult scenario. Participants were given an individual proficiency score by summing the scores on the three designs. The scoring metric is explained in Sec. 2.6.

After participants completed the individual assessment, teams of three were constructed. One-third of the teams had their first member randomly chosen from the participants with the highest proficiency scores. Another third of the teams had their first member randomly chosen from the participants with the lowest proficiency scores. All other team members were assigned randomly from the remaining subjects. This method of team creation was conducted in an attempt to form teams with more similar average proficiency, where no team would have more than one highest performing member or lowest performing member. As shown in Fig. 2, there were two team design sessions during the study, with each session having a different design problem. Teams were reconfigured between the two team design sessions, and no teams remained the same for both sessions.

Within a design session, the teams of three had 10 min to collectively design a truss that adequately supported the loads and met constraints. Each participant was on a different computer and did not know who the other members of their team were. Team members were free to interact with their team by sharing a design at any point during the design session. To ensure team interaction, forced interactions happened 4 min into the session and then at 90-s intervals thereafter. During a forced interaction, participants were shown the current design for all members of the team, including their own. Participants were required to choose one of the designs to continue working on, theirs or another's, before continuing. The interface is shown in Fig. 3 with an example of a team's first forced interaction. Like the freely shared designs, an image and the relevant parameters are shown.

Fig. 3
GUI interface for the forced interaction period. Every member's current design is shown and one must be chosen in order to continue.

Participants were informed that the best score attained at any point during each part of the experiment was used as the participant's or team's score for that part of the study, which encouraged effort for the duration of the design sessions.

Participants were compensated $5 for their participation, and to encourage sustained effort, the top 10% of performers during the individual assessment received an extra $5. Participants were told that the top 10% of performers during a portion of the study would receive the extra compensation. However, they were not told which portion of the study was being evaluated.

2.6 Evaluation Metric.

A scoring metric was developed to assess the quality of trusses, which took into account the strength and the mass of the truss
$$\mathrm{Score} = \left(\frac{\mathrm{Mass}_{\mathrm{target}}}{\mathrm{Mass}_{\mathrm{actual}}}\right) \times \mathrm{Multiplier}$$
(1)

The Multiplier is a disjoint function based on the FOS of the weakest truss member and is shown in Fig. 4 with the segment functions defined in Table 2. A quadratic loss function was used for the region below the FOS objective (1.25) but above an FOS of 1.0. During testing of the experiment, low performing participants struggled to reach an FOS of 1.0 during the individual assessment. Because their performance still needed to be captured, a linear function was used for the FOS region from 0.5 to 1.0 but was harshly penalized.

Fig. 4
Disjoint multiplier function used for scoring
Table 2
Segment multiplier functions

FOS range          | Multiplier function
FOS < 0.5          | Multiplier = 0
0.5 < FOS < 1.0    | Multiplier = 0.5 × FOS − 0.25
1.0 < FOS < 1.25   | Multiplier = 8 × (FOS − 1)² + 0.5
1.25 < FOS         | Multiplier = 1.0

The participants were not told the target mass ($\mathrm{Mass}_{\mathrm{target}}$) for each design, but rather to reduce the mass of the design while maintaining an FOS greater than 1.25. The target mass was selected for each design so that a score above 1.0, given the actual mass ($\mathrm{Mass}_{\mathrm{actual}}$), indicated that the quality of the truss was “good.”
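A minimal sketch of this scoring metric is shown below, assuming the mass term in Eq. (1) is the target-to-actual mass ratio and using the segment multiplier of Table 2; the function name and interface are illustrative rather than the study's implementation.

% Sketch of the scoring metric in Eq. (1) with the segment multiplier of Table 2.
function score = truss_score(mass_actual, mass_target, fos)
    if fos < 0.5
        multiplier = 0;
    elseif fos < 1.0
        multiplier = 0.5 * fos - 0.25;         % harshly penalized linear region
    elseif fos < 1.25
        multiplier = 8 * (fos - 1)^2 + 0.5;    % quadratic loss below the FOS objective
    else
        multiplier = 1.0;                      % design meets the FOS goal of 1.25
    end
    score = (mass_target / mass_actual) * multiplier;
end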

3 Regression

The primary objective of this analysis is to establish the effect individual members, specifically the most and least proficient members, have on team performance. To give insight into the influence of each member's proficiency, team performance was regressed against the proficiency of each member on the team. Team performance was characterized by the time it took for each team to reach certain scores. The scores used as the thresholds were 0.5, 0.75, and 1.0. These thresholds were chosen as they represent the score that indicated a “good” truss (1.0) and arbitrary scores leading up to the “good” threshold.

3.1 Linear Regression.

The linear regression model proposed is shown by Eq. (2)
$$t_{x,i} = \beta_0 + \beta_1 w_i + \beta_2 m_i + \beta_3 b_i, \quad i \in S$$
(2)

The set of teams is denoted by $S$, $w_i$ is the team's worst member's normalized proficiency score, $m_i$ is the team's middle member's normalized proficiency score, $b_i$ is the team's best member's normalized proficiency score, the $\beta$'s are the unknown coefficients, and $t_{x,i}$ is the time to reach a score, where the score is denoted by the subscript $x$. This regression model allows the correlation between changes in an individual member's proficiency and changes in team performance to be assessed; this correlation is known as the marginal effect of that member.
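A sketch of this ordinary least squares fit is shown below; the vectors t_x, w, m, and b are assumed to already hold the team times and normalized member proficiency scores, and data loading is omitted.

% Sketch: ordinary least squares fit of Eq. (2). t_x, w, m, and b are n-by-1
% vectors (time to reach threshold x, and worst/middle/best member proficiency).
mdl = fitlm([w m b], t_x, 'VarNames', {'worst', 'middle', 'best', 'time'});
disp(mdl.Coefficients)    % coefficient estimates, standard errors, and p-values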

3.2 Tobit Regression.

One challenge in using the time to reach scoring thresholds as the team performance metric is that to include all teams in the ordinary least squares regression analysis, all teams had to reach the thresholds. Unfortunately, for the 1.0 score threshold, the low performing teams did not reach that score in the allotted time.

To accommodate the low performing teams, a Tobit regression model was used [36]. Tobit regression was developed to accommodate censored data points in regression. A censored data point is one whose value is only partially known. In this case, it is known that the upper limit for measurement of tx,i is 600 (the length of the design session). It would take a team that has a tx,i of 600 longer than 600 s to reach the score threshold, but it is not known how long it would take. Censored data cannot be included in an ordinary least squares regression model as it will bias the coefficients due to the censored dependent variable.

Tobit regression uses a likelihood model to represent both uncensored and censored data points in a regression function. Uncensored data points are represented by their probability density function (PDF) and censored data points are represented by their cumulative distribution function (CDF). The Tobit model used for this study is shown by Eqs. (3) and (4)
$$I(t_{1.0,i}) = \begin{cases} 0 & \text{if } t_{1.0,i} \geq 600 \\ 1 & \text{if } t_{1.0,i} < 600 \end{cases}$$
(3)
$$L(\beta_0,\beta_1,\beta_2,\beta_3,\sigma) = \prod_{i=1}^{n} \left[\frac{1}{\sigma}\,\varphi\!\left(\frac{t_{1.0,i} - (\beta_0 + \beta_1 w_i + \beta_2 m_i + \beta_3 b_i)}{\sigma}\right)\right]^{I(t_{1.0,i})} \left[1 - \Phi\!\left(\frac{600 - (\beta_0 + \beta_1 w_i + \beta_2 m_i + \beta_3 b_i)}{\sigma}\right)\right]^{1 - I(t_{1.0,i})}$$
(4)

In this model, φ is the PDF of the standard normal distribution, Φ is the CDF of the standard normal distribution, and σ is the standard deviation. The remaining variables have the same definition as Eq. (2).

It is important to note that the coefficient estimates in the Tobit model do not imply the marginal effect of the independent variables as they do in ordinary least squares regression. Because a likelihood model is used and censored data are included, there is a different interpretation. Specifically, the marginal effect of the Tobit model can be interpreted by Eq. (5)
$$\frac{d\,t_{1.0,i}}{dX} = \Phi\!\left(\frac{\beta_0 + \beta_1 w_i + \beta_2 m_i + \beta_3 b_i}{\sigma}\right)\beta$$
(5)

The $X$ in the denominator of Eq. (5) represents the set of independent variables ($w_i$, $m_i$, and $b_i$). All other variables are consistent with Eqs. (2)–(4). The change in the team performance metric, $t_{1.0,i}$, with respect to a change in a team member's proficiency is equal to the coefficient estimate, $\beta$, multiplied by the probability of being above the upper limit [37]. The average marginal effects are found and used in the results section, meaning that the marginal effects from ordinary least squares regression and Tobit regression can be compared as they are measuring the same thing.
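A sketch of this Tobit fit by maximum likelihood is given below, with the right-censoring limit of 600 s from Eq. (3) and an average marginal effect computed following Eq. (5); the variable names, starting values, and use of a hand-coded likelihood (rather than a dedicated Tobit routine) are illustrative assumptions.

% Sketch: Tobit regression for the 1.0 score threshold (right-censored at 600 s),
% following Eqs. (3) and (4). t is n-by-1 with censored teams recorded as 600,
% and X = [ones(n,1) w m b]. The last parameter is log(sigma) to keep sigma positive.
U = 600;
cens  = (t >= U);                                   % I(t_{1.0,i}) = 0 for censored teams
negLL = @(p) -sum( ...
    (~cens) .* (log(normpdf((t - X*p(1:4)) ./ exp(p(5)))) - p(5)) + ...
      cens  .* log(1 - normcdf((U - X*p(1:4)) ./ exp(p(5)))) );
p0    = [regress(t, X); log(std(t))];               % OLS coefficients as starting values
phat  = fminsearch(negLL, p0);
beta  = phat(1:4);
sigma = exp(phat(5));
% Average marginal effects of the member proficiencies, following Eq. (5).
ame = beta(2:4) * mean(normcdf((X*beta) / sigma));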

4 Hidden Markov Modeling

A hidden Markov model (HMM) is a statistical model that represents a first-order Markov process with hidden states [38]. Unlike a simple Markov process, where the observations are the state, an HMM's states are represented by a probability distribution of emitting the observable tokens. With this probability distribution, an HMM is represented by a transition matrix T and emission matrix E. The T matrix is an n × n size matrix where n represents the number of hidden states. It shows the probability of the next state given the current state. For example, Tij represents the probability of transition from state i to state j. The E matrix is an n × a size matrix where a represents the number of observable tokens. For example, Eij represents the probability of emitting the observable token j given state i. In the context of this study, the observable tokens are the actions taken by the designer listed in Table 1.
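To make the notation concrete, the following toy example (with invented probabilities, and only two states rather than the four used later) shows how entries of T and E are read against the nine actions of Table 1.

% Toy example with invented probabilities: 2 hidden states, 9 observable actions.
% Each row of T and E sums to 1.
T = [0.80 0.20;    % T(1,2) = 0.20: probability of moving from state 1 to state 2
     0.35 0.65];
E = [0.30 0.40 0.10 0.15 0    0    0    0.05 0;     % state 1 emits mostly topology actions
     0    0    0    0    0.20 0.55 0.20 0    0.05]; % state 2 emits mostly sizing/shape actions
% E(2,6) = 0.55: probability that a designer in state 2 emits action 6
% ("Resize single member" in Table 1).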

HMMs have been used effectively in problem-solving contexts to uncover the cognitive processes involved [39–41]. Additionally, an HMM has been used on the same truss design problem to study overall team behavior and was found to successfully model the heuristic design process that design teams used [31]. In the current study, the individual team members’ design process will be evaluated using the HMM, as opposed to that of the team. This use of HMMs will help determine the strategy and behavior of the high and low proficiency members of the configuration design team.

4.1 Baum-Welch Algorithm.

The data needed to determine the T and E matrices are the number of hidden states and a set of sequences of emissions, which in the case of this study are sequences of actions taken by the designers. The Baum-Welch algorithm is an expectation-maximization technique used to determine the maximum likelihood estimate of the unknown parameters, which in an HMM's case are the T and E matrices [38,42].
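A sketch of this estimation step using MATLAB's hmmtrain (Statistics and Machine Learning Toolbox) is shown below; the cell array seqs of action sequences coded 1–9 and the random initial guesses are assumptions made for illustration.

% Sketch: Baum-Welch estimation of T and E from designer action sequences.
% seqs is assumed to be a cell array of vectors with action codes 1..9 (Table 1).
nStates  = 4;
nActions = 9;
rng(0);                                       % reproducible random initial guesses
Tguess = rand(nStates, nStates);   Tguess = Tguess ./ sum(Tguess, 2);
Eguess = rand(nStates, nActions);  Eguess = Eguess ./ sum(Eguess, 2);
[T, E] = hmmtrain(seqs, Tguess, Eguess, 'Maxiterations', 500);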

4.2 Viterbi Algorithm.

Another useful capability of an HMM is determining the most likely state path given the emission sequence, T matrix, and E matrix. The Viterbi algorithm is a maximum likelihood decoder used to find this state path [43,44]. A full description of the algorithm can be found in Ref. [44].
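A sketch of this decoding step, followed by the state-occupancy tally used later for Table 4, is shown below; seq is assumed to be one designer's action sequence and T and E the matrices estimated above.

% Sketch: most likely hidden-state path for one designer, and the proportion
% of time-steps spent in each of the 4 states.
states     = hmmviterbi(seq, T, E);                       % Viterbi decoding
proportion = histcounts(states, 1:5) / numel(states);     % occupancy of states 1..4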

5 Results

5.1 Data Characteristics.

Two populations of participants were used to increase sample size and generalizability. To ensure the two populations of participants could be combined, they were compared using the Kruskal–Wallis test. No significant difference was found between the groups when comparing individual assessment scores (p = 0.4153, n = 78), the mean assessment scores of the teams created (p = 0.5382, n = 23), and scores for the team sections of the study (p = 0.8055, n = 23). With this finding, the two populations were each normalized within their respective group and then combined.
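A sketch of this comparison and normalization is shown below, assuming z-score normalization within each population; scores1 and scores2 are placeholder vectors of individual assessment scores for the two courses.

% Sketch: Kruskal-Wallis comparison of the two populations, then within-population
% normalization before pooling. scores1 and scores2 are column vectors.
group = [ones(size(scores1)); 2 * ones(size(scores2))];
p = kruskalwallis([scores1; scores2], group, 'off');      % 'off' suppresses the figure
z = [(scores1 - mean(scores1)) / std(scores1); ...        % normalize within population 1
     (scores2 - mean(scores2)) / std(scores2)];           % normalize within population 2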

5.2 Regression Analysis.

The regression model results in Table 3 demonstrate the effect different members have throughout the study. Linear regression was used for the 0.5 and 0.75 score thresholds, while Tobit regression was used for the 1.0 score threshold. In this model, a negative correlation indicates a shorter time to reach score thresholds with higher proficiency, and therefore a positive effect.

Table 3
Marginal effect estimates for regression models

Regression model                        | $t_{0.5,i}$     | $t_{0.75,i}$    | $t_{1.0,i}$
n                                       | 23              | 23              | 23# (3)
Low proficiency member marginal effect  | 5.45 (24.25)    | −39.72 (49.01)  | −130.95* (59.84)
Mid proficiency member marginal effect  | −8.19 (15.42)   | −24.51 (29.92)  | −2.38 (36.23)
High proficiency member marginal effect | −48.33* (16.99) | −72.25* (32.98) | −138.12** (30.25)

Note: Each column represents a different regression model. The standard error for each estimate is shown in parentheses next to the estimate. Asterisks denote 1 (**) and 5 (*) percent significance levels. A pound sign (#) indicates that Tobit regression was utilized, and the number of censored data points is shown in parentheses next to the total number of data points.

As seen in Table 3, not all members are shown to have a significant effect on team performance throughout the design session. The only member to have a significant positive effect at each threshold is the most proficient, as seen in the bottom row of Table 3. The only other significant relationship in the regression model is for the lowest proficiency member at the 1.0 score threshold. While this implies that the proficiency of the worst member correlates with the time it takes to reach a score of 1.0, the lack of a significant relationship at lower thresholds could suggest that the low proficiency member only impacts the team after they have been influenced by other members of their team. Further analysis, found in subsequent sections, is needed to determine whether this is true.

What is important with these results is the unequal effect that different members have. The best member's proficiency has a positive correlation with team performance at every threshold, and the only other significant correlation is with the worst member's proficiency at the 1.0 score threshold. Based on the assumptions made by previous research using the average contribution model, one would expect that an increase in the proficiency of any member would correlate with an increase in team performance. That is not what is found by this analysis. With this finding, the relative contribution model seems to be well suited for constructing configuration design teams.

Additionally, if the correlation that the most proficient member has with team performance is indicative of a causal relationship, it would imply that improving the most proficient member would have a more significant effect on team performance than improving any other member by the same amount. Further analysis is required to determine causality, which is shown in subsequent sections.

5.3 Behavioral Analysis.

The regression results alone demonstrate the relationship that the most proficient member has with team performance; however, it is desired to further explain these results by determining differences in design strategy and behavior between the high and low proficiency members in the team design sessions. Discovering the underlying behaviors exhibited by the members of the team could support the regression results found in Sec. 5.2 and would allow a stronger case to be made for causal effects. HMMs were used to determine these design behavior heuristics.

5.3.1 Determining Number of Hidden States for Hidden Markov Model.

Previous HMM analysis of truss design determined that four hidden states accurately depict the team behavior throughout the design sessions [31]. Figure 5 confirms these findings for individual designer behavior. Leave-one-out cross-validation (LOOCV) was used to determine the log-likelihood of the left-out sequence being predicted by the HMM that was trained without it. The model with four hidden states does not have significantly less predictive power than the most complex model, thus balancing parsimony and accuracy.
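A sketch of this LOOCV procedure is shown below, reusing MATLAB's hmmtrain and using hmmdecode for the held-out log-likelihood; the candidate range of hidden states and the random initial guesses are illustrative assumptions.

% Sketch: leave-one-out cross-validation of the number of hidden states.
% Each sequence is held out in turn, an HMM is trained on the rest, and the
% held-out log-likelihood is computed with hmmdecode. seqs holds action codes 1..9.
nActions = 9;
for nStates = 2:8
    ll = zeros(numel(seqs), 1);
    for k = 1:numel(seqs)
        train = seqs([1:k-1, k+1:end]);                    % all sequences except k
        Tg = rand(nStates, nStates);   Tg = Tg ./ sum(Tg, 2);
        Eg = rand(nStates, nActions);  Eg = Eg ./ sum(Eg, 2);
        [T, E] = hmmtrain(train, Tg, Eg);
        [~, ll(k)] = hmmdecode(seqs{k}, T, E);             % log-likelihood of held-out sequence
    end
    fprintf('%d states: mean held-out log-likelihood %.1f\n', nStates, mean(ll));
end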

Fig. 5
Testing log-likelihood with different number of hidden states

5.3.2 Determining Number of Segments of Team Design Session.

To get a better grasp of why different proficiency members have different impacts throughout the study, the team design session was broken down to see what design strategies were utilized in different portions of the session.

There is no obvious answer to how the team design session should be segmented. As when determining how many states the HMM should have, LOOCV was used to test the log-likelihood of the trained HMM predicting the left-out test sequence. The difference in this analysis is that the number of segments the study is broken into, rather than the number of hidden states in the HMM, is being tested. Because segmenting the study produces action sequences of different lengths, and longer sequences will have a lower log-likelihood, the testing log-likelihood was normalized by the average number of actions in each sequence. The results of this analysis are shown in Fig. 6.

Fig. 6
LOOCV testing log-likelihood per emission based on the number of segments the study is broken into

There is no statistically significant difference in the log-likelihood per emission, regardless of how many segments the study is broken into. Because the predictive power does not differ and it is common to break such analyses into quartiles, it was decided to divide the 10-min design session into four equal segments of 2.5 min each to see how behavior changes over time. To ensure the accuracy of the findings, a sensitivity analysis was conducted in which the design session was broken into three and five segments. Results from the three- and five-segment analyses support the same conclusions that are discussed in Secs. 5.3.3 and 6.

5.3.3 Hidden Markov Models for Segmented Design Session.

After determining that four hidden states were appropriate and upon dividing the action sequences of the designers into quartiles, HMMs were trained for the highest proficiency and lowest proficiency team members on the four action sequences. Sample E and T matrices for the most and least proficient team members for the first quartile of the study are shown in Figs. 7 and 8.

Fig. 7
Emission matrices for high and low proficiency members during the first quartile of the design session
Fig. 8
Transition matrices for high and low proficiency members during the first quartile of the design session

It is beneficial to note which types of moves are found in each state and to give each state a name for ease of interpretation. As seen in Fig. 7, when a designer is in State 1 or State 2, they have nearly 100% probability of emitting a topology operation (Add Joint, Add Member, Delete Joint, and Delete Member), and these are the only two states that have a significant probability of emitting a topology operation. State 1 focuses on adding topological elements and is named “Build Topology.” State 2 has a mixture of topological elements and is named “Mixed Topology.” State 3 has both sizing operations (Resize All Members) and shape operations (Move Node) and is therefore named “Shape and Broad Sizing.” State 4 is nearly entirely fine-tuning sizing operations (Resize Single Member) and is named “Fine Sizing.”

Figures 7 and 8 show that although there are subtle differences in the matrices for the high and low proficiency members during different segments of the study, these matrices do not reveal much information on their own. With the E and T matrices, the Viterbi algorithm was used to determine the expected proportion of time-steps that were in each state. These results can be seen in Table 4.

Table 4
Proportion of time-steps in each state of the hidden Markov models on segments of the design session for the high and low proficiency members of design teams

Proportion of time-steps in states: 0–150 s
State                      | High proficiency | Low proficiency
1. Build topology**        | 0.319 (0.045)    | 0.601 (0.061)
2. Mixed topology**        | 0.199 (0.029)    | 0.081 (0.020)
3. Shape and broad sizing  | 0.077 (0.021)    | 0.112 (0.030)
4. Fine sizing*            | 0.405 (0.054)    | 0.205 (0.053)

Proportion of time-steps in states: 150–300 s
State                      | High proficiency | Low proficiency
1. Build topology          | 0.239 (0.045)    | 0.268 (0.040)
2. Mixed topology          | 0.010 (0.013)    | 0.091 (0.014)
3. Shape and broad sizing* | 0.085 (0.023)    | 0.202 (0.045)
4. Fine sizing             | 0.577 (0.050)    | 0.439 (0.071)

Proportion of time-steps in states: 300–450 s
State                      | High proficiency | Low proficiency
1. Build topology**        | 0.141 (0.040)    | 0.031 (0.007)
2. Mixed topology          | 0.199 (0.043)    | 0.187 (0.041)
3. Shape and broad sizing* | 0.169 (0.043)    | 0.332 (0.068)
4. Fine sizing             | 0.491 (0.049)    | 0.450 (0.069)

Proportion of time-steps in states: 450–600 s
State                      | High proficiency | Low proficiency
1. Build topology          | 0.134 (0.037)    | 0.071 (0.020)
2. Mixed topology          | 0.069 (0.023)    | 0.135 (0.032)
3. Shape and broad sizing* | 0.145 (0.041)    | 0.345 (0.062)
4. Fine sizing*            | 0.653 (0.056)    | 0.450 (0.068)

Note: The standard error for each estimate is shown in parentheses. Asterisks denote 1 (**) and 5 (*) percent significance levels for the difference in the proportion of steps in a state during a quartile of the design session between the high and low proficiency members of the team.

Table 4 reveals the differences in the design strategy used by the different team members in the study; however, it can be seen that many of the differences are not significant. The first quartile has the most significant differences: the least proficient team member spends a significantly higher proportion of steps in Build Topology (State 1) and lower proportions in Mixed Topology (State 2) and Fine Sizing (State 4). After the first quarter of the design session, the least proficient member spends a significantly higher proportion of time-steps in Shape and Broad Sizing (State 3). This demonstrates that the least proficient member is spending more time doing shape operations or broad-stroke sizing operations than the most proficient member.

In quartile 3, the most proficient member was found to spend significantly more time in Build Topology (State 1), but because both State 1 and State 2 emit topology operations, it cannot be said that they are performing significantly more topology actions overall.

The last significant finding is that during the final quarter of the design session, the most proficient member has a significantly higher proportion of their time-steps in Fine Sizing (State 4) than the least proficient member of the team.

5.3.4 Topology Comparison Throughout Design Session.

One of the most intriguing findings from the analysis of the HMMs is the number of differences during the first quartile. Specifically, the most proficient members spend fewer time-steps building their truss's topology. To assess this further, the topology of the trusses being designed by the most proficient and least proficient members was examined. Figure 9 shows the average number of members and the average number of nodes in the trusses being designed by each of the different proficiency members of the team.

Fig. 9
Mean number of (a) members and (b) nodes of most and least proficient team members throughout the study

Initially, both the most and least proficient members spend time building the topology of their designs; however, as time progresses, the least proficient member continues to add to the topology of their truss while the most proficient member slows their topology additions. Between 100 and 150 s into the study, the average number of members and the average number of nodes in each member's design are significantly different. This result fits with the findings from Table 4, which show that the least proficient member spends more time in State 1 during the first quarter of the study, which entails adding members and nodes to the design. The least proficient members' number of topological elements begins to drop before the first forced interaction, demonstrating that the teams are productively sharing ideas without prompting. After the first forced interaction, the different members of the team are working on similarly complex truss topologies. Thus, the more proficient members are creating less intricate structures, to which the team appears to converge.

5.3.5 Topological Origins of Team Designs.

While the results from Fig. 9 appear to show that the least proficient members were converging to the topology of their most proficient teammates, this cannot be explicitly known from the previous analysis. Later in the study, the number of members and nodes in the trusses being designed by the different proficiency members converges to similar averages. While interaction occurs in many teams before the first forced interaction, it is seen that the least proficient member's average number of topological elements drops precipitously just before and during the first forced interaction toward the most proficient member's average. This implies that the least proficient member chooses to switch away from their original designs to work on the topologically simpler trusses of the most proficient team member.

To determine if this inference is accurate, the designs that each member worked on during the study were tracked. It is possible to determine which member initially created the topology of a truss being designed by tracking when members switched designs and whose designs they switched to.

A clear indication of the impact a member has in the study is whether anyone on a team is working on a topology created by that member. There is a stark difference between the most and the least proficient team members in this metric. After the first forced interaction, at 240 s into the study, the least proficient members’ topology is being worked on in only 30% of teams. The most proficient members’ topology is being worked on by someone on the team in 74% of teams (these two percentages add up to greater than 100% because individual members can work on different designs). This is a statistically significant difference (p = 0.0025, n = 23). In addition to this result, the most proficient member creates the original topology for the team's best design more than any other member of the team. Though not a statistically significant result, this again hints that the most proficient member has a greater effect on team performance due to creating the basic structure for the team's final design more than any other member.

This analysis of truss topological origins confirms that the least proficient member of the team is often switching to the truss designs that were made by their more proficient teammates, which was hinted at by the analysis in Sec. 5.3.4.

5.3.6 Final Team Designs and Late Stage Behavior.

The previous analysis focused on the influence early in the design session. Evaluating which member created the team's final design is also an important factor, as it will help support the findings from the regression analysis for higher score thresholds. It was found that the most proficient member created the final team design more often than any other member; however, the difference was not significant, much like the finding that the most proficient member created the original topology for the team's best design more than any other member on the team.

Another finding in this analysis is that the member who creates the initial topology for the final design is not always the same member that uses that initial topology to create the team's best design. In 8 of the 23 teams that were analyzed, the member who created the initial topology for the final design and the member who used that topology to create the final design were different team members. In 15 of 23 teams, different members held the best score after the team members had switched to the same design. Finally, only 2 of 23 teams had the same member hold the best score during the entire design session. These findings confirm that the teams are successfully interacting and benefitting from the other members of the team by improving designs made by other members.

Additionally, the number of moves made by the different proficiency members was assessed to determine whether any of them exerted less effort during the design session, also known as social loafing. The less proficient members were not found to perform fewer moves than their most proficient teammates, or vice versa, and based on the result that the least proficient member's proficiency correlated with team performance at higher thresholds, it was determined that social loafing did not occur.

6 Discussion

The results from this study demonstrate that the performance of a configuration design team is more dependent on the proficiency of the team's most proficient member than on that of the least proficient member. This result suggests that previous research findings, namely that the average characteristics of a team dictate team performance, do not tell the whole story, and that using task proficiency in the relative contribution model is beneficial for constructing engineering design teams.

The strength of this result comes from the combined regression and individual behavior analysis. The regression models found that the most proficient member had a significant positive marginal effect on how quickly the configuration design team reached performance thresholds. Only at higher performance thresholds did the least proficient member have a significant effect. Because only the most proficient member had a significant correlation with team performance at lower thresholds, it is posited that the most proficient member is developing the basis for the team's design and thus driving their performance throughout the design session.

To further validate these findings and to identify the underlying mechanics behind this result, the behavior and design strategy of the individuals were assessed. It was found that different design strategies were utilized by the most and least proficient team members. Specifically, the most proficient member spent less time building the initial topology of their structure, more time fine-tuning their design with sizing operations at the end of the design sessions, and moved more quickly to shape and sizing operations than their least proficient teammates. Although the high proficiency members in this study should not be considered experts in truss design due to their limited experience, this result matches what has been previously found with expert designers when compared to novice designers, in that expert designers move more quickly to design solutions and spend more time on the final steps of the design process (optimizing the truss through fine-tuning in this case) [25]. Social loafing was not found to be a factor in this experiment. Although it is a common phenomenon in team settings, it is believed that the short design sessions and small team sizes reduced the likelihood of social loafing occurring.

Finally, the least proficient team members, on average, were found to converge toward the topology of the most proficient team members. It was found that after the first forced interaction, the least proficient members’ designs were only being worked on in 30% of the teams. In contrast, the most proficient members’ original topology was being worked on in 74% of the teams at the same time interval. These findings confirmed the theory that the most proficient member was more likely to create the foundation for the team's eventual final design solution and therefore drove the team's performance.

In summary, these results do not demonstrate that the least proficient member is holding the team back; they only demonstrate that the most proficient member is driving team performance. Our results suggest that replacing the most proficient member of the team with an even more proficient member would have a more positive impact than making an equivalent improvement to any other member of a configuration design team, including replacing a low proficiency member with a higher proficiency one. Thus, how individual team members change the average proficiency of a team will affect the change in team performance in different and unequal ways, which is a more nuanced result than has previously been found.

These results also have implications for configuration design. Due to the nature of the task and the team structure, configuration design falls into the category of a discretionary task under Steiner's typology. A discretionary task allows teams to combine their individual contributions in any manner they wish. It is shown that this is what happened in this study, as the final output is a team-influenced product and not an individual effort. Designers throughout the study made choices such as switching away from their design, but then improving upon the design they switched to. In over a third of teams, the original topology for the team's final design was made by a member who did not create the team's best final product, and more than half the teams had different members hold the best score at some point after the members had switched to the same design. These results suggest that configuration design is a discretionary task as member choices throughout the design session determined which member had the most influence.

The problem of truss design is a familiar, yet nontrivial design problem, and it is representative of configuration design. Previous studies have found that insights on teams transfer across configuration design problems [31,32]. With these previous findings and the findings of Steiner and others [17,45], we expect results from this study to transfer to similar configuration design problems and configuration design teams. Future studies are necessary to verify this and determine whether these results transfer to other types of engineering design tasks and different team structures as well.

There are limitations to this study; however, the limitations offer promising avenues for future work. In this study, designers were students, the design sessions were short, and the communication was limited to the sharing of designs rather than verbal or text communication. These limitations enabled a close examination of the effect that member proficiency had on team performance, although there could be confounding variables that are not assessed fully in this study.

7 Conclusions

This study demonstrated that previous research on the average characteristics of teams does not provide adequate insight into how changes in team composition will affect team performance. Instead, this study validated the usefulness of the relative contribution model for team composition and the effect that individual task proficiency has in configuration design teams. Our results showed that a configuration design team's performance is most affected by the proficiency of its most proficient member. Thus, to most improve a configuration design team's performance, changes in team composition should focus on increasing the highest proficiency found on the team. Whether this relationship holds for other types of design problems should be further examined.

The behavior of the different proficiency members was assessed in this study as well. It was found that high proficiency members spent less time building the topology of their design early in the design session and more time doing fine-tune operations late in the design session. Lower proficiency members were found to switch away from their original designs to work on the topologically simpler designs of the more proficient team members. These differences complemented the regression results, demonstrating how the most proficient member of the configuration design team drove team performance.

This study concluded how individual team member proficiency affects team performance; however, it is known that other individual characteristics may have different effects. Using the relative contribution model to examine other individual characteristics of team members, such as personality traits or emotional intelligence, and how changes at the individual member level impact team performance are prudent and could be the focus of future work. Another avenue for future work is expanding to different design problems and different team structures. Understanding how individual proficiency and other individual team member characteristics influence larger teams or teams solving a design problem that involves subtasks is a promising area of future exploration.

Acknowledgment

This material is based on work supported by the United States Air Force Office of Scientific Research (AFOSR) (Grant No. FA9550-18-1-0088). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The data sets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1. Levi, D., and Askay, D. A., 2021, Group Dynamics for Teams, SAGE, Thousand Oaks, CA.
2. Hassannezhad, M., Cantamessa, M., Montagna, F., and John Clarkson, P., 2019, “Managing Sociotechnical Complexity in Engineering Design Projects,” ASME J. Mech. Des., 141(8), p. 081101. 10.1115/1.4042614
3. Schunn, C. D., Paulus, P. B., Cagan, J., and Wood, K., 2006, The Scientific Basis of Individual and Team Innovation and Discovery.
4. Driskell, J. E., Salas, E., and Driskell, T., 2018, “Foundations of Teamwork and Collaboration,” Am. Psychol., 73(4), pp. 334–348. 10.1037/amp0000241
5. Bass, B., 1982, “Individual Capability, Team Performance, and Team Productivity,” Human Performance and Productivity: Human Capability Assessment, M. D. Dunnette and E. A. Fleishman, eds., Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 179–232.
6. Stewart, G. L., 2006, “A Meta-analytic Review of Relationships Between Team Design Features and Team Performance,” J. Manage., 32(1), pp. 29–55. 10.1177/0149206305277792
7. Mathieu, J., Maynard, T. M., Rapp, T., and Gilson, L., 2008, “Team Effectiveness 1997–2007: A Review of Recent Advancements and a Glimpse Into the Future,” J. Manage., 34(3), pp. 410–476. 10.1177/0149206308316061
8. Campion, M. A., and Medsker, G. J., 1993, “Relations Between Work Group Characteristics and Effectiveness: Implications for Designing Effective Work Groups,” Pers. Psychol., 46(4), pp. 823–847. 10.1111/j.1744-6570.1993.tb01571.x
9. Kahan, J. P., Webb, N., Shavelson, R. J., and Stolzenberg, R. M., 1985, Individual Characteristics and Unit Performance: A Review of Research and Methods, The Rand Corporation, Office of the Assistant Secretary of Defense for Manpower, Installations, and Logistics.
10. Salas, E., Cannon-Bowers, J. A., and Blickensderfer, E. L., 1993, “Team Performance and Training Research: Emerging Principles,” J. Washingt. Acad. Sci., 83(2), pp. 81–106.
11. Mathieu, J. E., Gallagher, P. T., Domingo, M. A., and Klock, E. A., 2019, “Embracing Complexity: Reviewing the Past Decade of Team Effectiveness Research,” Annu. Rev. Organ. Psychol. Organ. Behav., 6(1), pp. 17–46. 10.1146/annurev-orgpsych-012218-015106
12. Hülsheger, U. R., Anderson, N., and Salgado, J. F., 2009, “Team-Level Predictors of Innovation at Work: A Comprehensive Meta-analysis Spanning Three Decades of Research,” J. Appl. Psychol., 94(5), pp. 1128–1145. 10.1037/a0015978
13. Bell, S. T., 2007, “Deep-Level Composition Variables as Predictors of Team Performance: A Meta-analysis,” J. Appl. Psychol., 92(3), pp. 595–615. 10.1037/0021-9010.92.3.595
14. DeChurch, L. A., and Mesmer-Magnus, J. R., 2010, “The Cognitive Underpinnings of Effective Teamwork: A Meta-Analysis,” J. Appl. Psychol., 95(1), pp. 32–53. 10.1037/a0017328
15. Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., and Malone, T. W., 2010, “Evidence for a Collective Intelligence Factor in the Performance of Human Groups,” Science, 330(6004), pp. 686–688. 10.1126/science.1193147
16. Mathieu, J. E., Tannenbaum, S. I., Donsbach, J. S., and Alliger, G. M., 2014, “A Review and Integration of Team Composition Models: Moving Toward a Dynamic and Temporal Framework,” J. Manage., 40(1), pp. 130–160. 10.1177/0149206313503014
17. Steiner, I. D., 1972, “Group Performance of Unitary Tasks,” Group Process and Productivity, Academic Press, New York, NY, pp. 14–39.
18. Devine, D. J., and Philips, J. L., 2001, “Do Smarter Teams Do Better: A Meta-analysis of Cognitive Ability and Team Performance,” Small Gr. Res., 32(5), pp. 507–532. 10.1037/e413782005-175
19. Valcea, S., Hamdani, M., and Bradley, B., 2019, “Weakest Link Goal Orientations and Team Expertise: Implications for Team Performance,” Small Gr. Res., 50(3), pp. 315–347. 10.1177/1046496418825302
20. Felps, W., Mitchell, T. R., and Byington, E., 2006, “How, When, and Why Bad Apples Spoil the Barrel: Negative Group Members and Dysfunctional Groups,” Res. Organ. Behav., 27(6), pp. 175–222. 10.1016/s0191-3085
21. Raver, J. L., Ehrhart, M. G., and Chadwick, I. C., 2012, “The Emergence of Team Helping Norms: Foundations Within Members’ Attributes and Behavior,” J. Organ. Behav., 33(5), pp. 616–637. 10.1002/job.772
22. Bass, B. M., 1980, “Team Productivity and Individual Member Competence,” Small Gr. Res., 11(4), pp. 431–504.
23. Kozlowski, S. W. J., and Bell, B. S., 2003, “Work Groups and Teams in Organizations,” The Handbook of Organizational Culture and Climate, W. C. Borman, D. R. Ilgen, R. J. Klimoski, and I. B. Weiner, eds., John Wiley & Sons, Hoboken, NJ, pp. 333–376.
24. Atman, C. J., Chimka, J. R., Bursic, K. M., and Nachtmann, H. L., 1999, “A Comparison of Freshman and Senior Engineering Design Processes,” Des. Stud., 20(2), pp. 131–152.
25. Cross, N., 2004, “Expertise in Design: An Overview,” Des. Stud., 25(5), pp. 427–441. 10.1016/j.destud.2004.06.002
26. Karau, S. J., and Williams, K. D., 1993, “Social Loafing: A Meta-analytic Review and Theoretical Integration,” J. Pers. Soc. Psychol., 65(4), pp. 681–706.
27. Harkins, S. G., and Petty, R. E., 1982, “Effects of Task Difficulty and Task Uniqueness on Social Loafing,” J. Pers. Soc. Psychol., 43(6), pp. 1214–1229.
28. Mittal, S., and Frayman, F., 1989, “Towards a Generic Model of Configuration Tasks,” Proc. Elev. Int. Jt. Conf. Artif. Intell., 2(1), pp. 1395–1401.
29. Wielinga, B., and Schreiber, G., 1997, “Configuration-Design Problem Solving,” AI Des., 12(2), pp. 49–56. 10.1109/64.585104
30. Christensen, P. W., and Klarbring, A., 2008, An Introduction to Structural Optimization, Springer Science & Business Media, New York, NY.
31. McComb, C., Cagan, J., and Kotovsky, K., 2017, “Mining Process Heuristics From Designer Action Data Via Hidden Markov Models,” ASME J. Mech. Des., 139(11), p. 111412. 10.1115/1.4037308
32. McComb, C., Cagan, J., and Kotovsky, K., 2017, “Optimizing Design Teams Based on Problem Properties: Computational Team Simulations and an Applied Empirical Test,” ASME J. Mech. Des., 139(4), p. 041101. 10.1115/1.4035793
33. McComb, C., Cagan, J., and Kotovsky, K., 2015, “Rolling With the Punches: An Examination of Team Performance in a Design Task Subject to Drastic Changes,” Des. Stud., 36(1), pp. 99–121. 10.1016/j.destud.2014.10.001
34. McComb, C., Cagan, J., and Kotovsky, K., 2015, “Lifting the Veil: Drawing Insights About Design Teams From a Cognitively-Inspired Computational Model,” Des. Stud., 40(1), pp. 119–142. 10.1016/j.destud.2015.06.005
35. Juvinall, R. C., and Marshek, K. M., 2012, Fundamentals of Machine Component and Design, John Wiley & Sons, Hoboken, NJ.
36. Tobin, J., 1958, “Estimation of Relationships for Limited Dependent Variables,” Econometrica, 26(1), p. 24.
37. McDonald, J. F., and Moffitt, R. A., 1980, “The Uses of Tobit Analysis,” Rev. Econ. Stat., 62(2), pp. 318–321.
38. Baum, L. E., Petrie, T., Soules, G., and Weiss, N., 1970, “A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains,” Ann. Math. Stat., 41(1), pp. 164–171.
39. Tenison, C., and Anderson, J. R., 2016, “Modeling the Distinct Phases of Skill Acquisition,” J. Exp. Psychol. Learn. Mem. Cogn., 42(5), pp. 749–767. 10.1037/xlm0000204
40. Anderson, J. R., and Fincham, J. M., 2014, “Discovering the Sequential Structure of Thought,” Cogn. Sci., 38(2), pp. 322–352. 10.1111/cogs.12068
41. Goucher-Lambert, K., and McComb, C., 2019, “Using Hidden Markov Models to Uncover Underlying States in Neuroimaging Data for a Design Ideation Task,” Proc. Des. Soc. Int. Conf. Eng. Des., 1(1), pp. 1873–1882. 10.1017/dsi.2019.193
42. Bilmes, J. A., 1998, A Gentle Tutorial of the EM Algorithm and Its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models, Technical Report TR-97-021, ICSI.
43. Viterbi, A. J., 1967, “Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm,” IEEE Trans. Inf. Theory, 13(2), pp. 260–269.
44. Forney, G. D., 1973, “The Viterbi Algorithm,” Proc. IEEE, 61(3), pp. 268–278.
45. McGrath, J. E., 1984, “A Typology of Tasks,” Groups: Interaction and Performance, Prentice-Hall, Englewood Cliffs, NJ, pp. 53–66.