Abstract

Product evaluation throughout the design process is a fundamental task for product success, and it also helps to reduce design-related costs. Physical prototyping is a common method to assess design alternatives, but it often requires significant amounts of time and money. Extended reality (XR) technologies are changing how products are presented to users, making virtual prototyping an effective tool for product evaluation. However, it is generally assumed that our perceptual and emotional responses to a product viewed in an XR modality are comparable to those elicited by the physical product. This paper reports the results of a study where a group of participants evaluated three designs of a product (i.e., umbrella stands) when viewed in a real setting, in virtual reality (VR), and in VR with passive haptics. Our goal was to observe the influence of the visual medium on product perception, and how the use of a complementary item (i.e., a physical umbrella) for interaction, as well as the user's design expertise, influences product assessment. Results show that Jordan's psycho-pleasure category of assessment was the most affected by the presentation medium, whereas the ideo-pleasure category was the only one not influenced by the medium. We also highlight that VR with passive haptics could be an effective tool for product evaluation, as illustrated by our study of umbrella stands with young consumers. Our study also shows that the user's background does not influence the level of confidence in their responses, but it can influence the assessment of certain product features. Finally, the use of a complementary item for interaction may have a significant effect on product perception.

Introduction

Conceptual design is a critical phase of the product development process, as it largely determines the cost and quality of a new product. Getting feedback from potential users during this phase is important to identify the needs and issues that must be addressed in concept validation [1] and to fulfill design goals [2], as design changes are relatively inexpensive and easy to perform at this stage. The cost of making a design change increases dramatically as the product moves through its lifecycle [3].

During the product development process, the ability to fully understand a form and accurately interpret its geometry is crucial [4]. Design engineers use CAD tools to conceptualize, design, visualize, and validate certain parts of the design, but the inherent limitations of displaying 3D geometry on a 2D screen make it difficult to quickly and fully understand the product features [5].

Many researchers agree that the success of a product is highly dependent on the evaluation process that is conducted throughout design, even at the early stages [6]. Both the designer (who can test aesthetic, functional, technical, and performance aspects of the product) and the end user (who can reveal design errors or misinterpretations of the initial requirements) are part of the evaluation process [7]. Depending on the testing purpose, we can choose among various types of prototypes: visual prototypes, which simulate the final aesthetics of the product (e.g., sketches or photorealistic renders); form prototypes, which highlight the shape and size of the product (and are generally made with rapid prototyping or hand-made techniques); functional prototypes, which simulate product performance and can be correctly assessed using a CAD model; fully physical prototypes, which simulate product final design, aesthetics, materials, and functionality [7].

In this regard, physical prototyping is the most common method to obtain user feedback, but traditional physical prototyping techniques may involve large financial and time investments, and costs can increase even further when modifications and adjustments need to be made to the prototypes. Because of this, understanding the utility of different product representation methods (i.e., sketches, 3D representations, or physical prototypes) can help reduce time and cost [8]. During the design process, the fidelity of a prototype may vary depending on the stage of product development [9] and on the purpose of the tests [10], as manufacturing high-fidelity prototypes has a significant impact on product development costs. This factor may significantly affect the user's confidence and accuracy in their product evaluation [8].

Recent advances in visualization technologies have made virtual prototyping an effective and sustainable tool for design evaluation [11,12], especially during the early stages of development, where many design variations must be produced [13]. These technologies enable the creation of high-fidelity geometric representations in a rapid and cost-effective manner for the evaluation of product aesthetics, ergonomics, and usability [7]. However, certain product features can be difficult to evaluate with virtual prototyping techniques, particularly if physical interaction with the product is required [9]. In design evaluation, a key advantage of physical prototyping over virtual prototyping is the ability to feel and interact with the physical product [14]. Indeed, the sense of touch is critical when evaluating a product because it provides a direct way to obtain information that could not be acquired otherwise [15]. Some new technologies are attempting to fill this gap in virtual environments.

The term “haptics” refers to the artificial forces exchanged between virtual objects and the user’s body. Haptics are commonly classified as passive or active. Active haptics are controlled by a computer, and forces can be dynamically manipulated to provide a wide range of sensations for simulated virtual objects [16]. Although some authors have argued that active haptic devices can be a good option for haptic feedback [17], others have pointed out the large financial investment required and the need for a much larger workspace to fit some devices to the user’s hand [18]. Researchers have also mentioned the discomfort of wearing haptic devices, which may reduce the feeling of presence and immersion in the virtual environment [19] and have a negative impact on the user’s emotional response.

Alternatively, passive haptics provide a sense of touch in VR by synchronizing physical objects with virtual assets [20], significantly reducing costs without intrusive devices: only the user's hands are needed. Passive haptics have been shown to increase the sense of presence, improve cognitive mapping of the environment, and improve training performance [21]. Interaction in a VR with Passive Haptics (VRPH) setting can enhance the overall user experience, along with the possibility of interacting with and modifying the virtual setting in real-time (e.g., the environment or the product aesthetics). Although a main drawback of passive haptics has traditionally been the absence of the hands in the virtual environment, low-cost VR devices such as the Oculus Quest have introduced nonintrusive hand-tracking in their virtual experiences, where people can see a virtual representation of their hands in the digital environment that is updated in real-time. Although proper calibration is required to ensure the virtual hands are displayed correctly in the virtual environment, effective experiences can be delivered at a very low cost.

The use of VR has proven to be an effective alternative for product evaluation during the early stages of development. However, its value is predicated on the assumption that our perceptual and emotional responses to a product perceived through a VR environment are similar to those elicited by the actual product, which is not necessarily the case [22,23], particularly when evaluating product features that rely heavily on our sense of touch [24–26]. Although some researchers have incorporated mechanisms to simulate the sense of touch in VR to make product evaluations more accurate, the effect of integrating nonintrusive virtual models of the user's hands has not yet been considered.

In this paper, we report the results of an experiment where a group of participants were asked to evaluate three designs of a product (i.e., umbrella stands) in three visual media: a real setting (R), a virtual reality environment (VR), and a Virtual Reality environment with Passive Haptics (VRPH). Participants were also asked to indicate their intended purchasing decision and to rate their level of confidence in their response. We considered the user’s background (trained in design versus not trained in design) to determine the influence on product evaluation and the existence (or lack thereof) of a combined effect between this factor and the medium used to view the product. Finally, we examined how the use of a physical item can affect product perception not only by helping users understand the function of the product under evaluation but also by providing interaction opportunities to generate an experience in which users feel more engaged [27].

Background

According to Steuer [28], VR is “a real or simulated environment in which a perceiver experiences telepresence.” It can also be defined as a “computer-generated digital environment that can be experienced and interacted with as if that environment were real” [16]. Although VR technologies have been around for years, recent advances have made them more affordable. VR has been successfully applied in a variety of areas, such as entertainment [29], healthcare [30], education [31], psychology and marketing [32], architecture [33], and industrial design and product development [14,34].

In design disciplines, market competitiveness and saturation have driven companies to emphasize product attributes that address meaning and emotion [35], how these can be expressed to positively influence the user's experience, and how consumers think, feel, and act [36]. In current markets, the added value enabled by these attributes can significantly influence consumer choice [37]. Although critical, innovation is not the only factor in product success. In fact, some of the most innovative products fail when they reach the market [32,33]. The subtle differences between products in terms of technical characteristics, quality, and price often make product differentiation a difficult task. In this context, consumer emotions play an important role [38] and can significantly influence the purchasing decision [39].

Various models have been proposed to characterize product emotion [40]. Jordan proposed an approach with four different pleasure categories [41]: physical (pleasures deriving from sensory organs), social (pleasures deriving from relationships with others), psychological (pleasures related to people’s cognitive and emotional reactions) and ideological (pleasures related to people’s values). Alternatively, Desmet [42] applied cognitive appraisal theory to explain the process of product emotion, and Norman [43] explained product emotion through a neurobiological emotion-framework that distinguishes several levels of information processing: visceral, behavioral, and reflective.

Numerous studies have examined the influence of the presentation medium on product evaluation (e.g., 2D images, interactive 3D models, AR, or VR). Söderman offered some initial insights with a study that examined the perceptual differences elicited by a car when viewed in a nonimmersive VR environment and as a set of sketches, compared to reality [44]. Reid, MacDonald, and Du performed a similar experiment with different 2D visual media [45], observing that the medium used to present the product could influence customer judgments. Their conclusion was confirmed by Artacho-Ramírez et al., who conducted an experiment [22] where two models of loudspeakers were presented in different media (photographs, static on-screen images, 3D models navigable with a computer mouse, and 3D models navigable with stereoscopic images) and user evaluations were compared with those of the corresponding real products. Their findings showed that the type of representation significantly influences product perception, which was later corroborated by the studies in Refs. [9,46,47].

Although researchers have investigated the influence of media on product evaluation [2,48], only recently have they begun to incorporate immersive VR headsets (e.g., Oculus Rift or HTC Vive) in their experiments [26,49]. For example, Galán et al. [25] made one of the first contributions using VRPH. Although users were not able to see their hands, their results demonstrated that the medium used to display the product influences how users perceive it, and that the use of VRPH settings could positively influence product assessment. They also found that certain product features are more sensitive to perceptual differences, such as those related to the sense of touch. The authors also proposed a division of the semantic differential into Jordan's pleasure categories to obtain more specific results, concluding that Jordan's approach was an effective method to evaluate product perception. For this reason, we have adopted this approach in our study.

Researchers have also explored the influence of the user’s background on product evaluation when using different visual media [50]. The effect of training in design or the fine arts on product evaluation has also been extensively studied [51]. Solso explored the influence of user expertise on brain activity [52], concluding that trained participants employed higher-level cognitive abilities to a greater extent than untrained participants. Other authors have suggested that participants trained in design disciplines are more capable of identifying the relationships between the composition of an entire form and its elements [53,54]. We speculate that this factor may also have an impact on the perception of a product and should be considered when using VR, as there could be a combined effect between background and medium that affects product assessment.

Research Goals and Hypotheses

The main goal of our study was to determine whether the medium used to present a product influences how it is perceived and evaluated, and whether the integration of haptics can help reduce perceptual differences. To this end, we selected three different designs of a product (i.e., an umbrella stand) and asked a group of participants to assess the products as presented in R, VR, and VRPH using the semantic differential technique.

Some studies suggest that a simple product representation provides enough information to perform a reasonable evaluation [55]; therefore, the purchasing decision is not necessarily influenced by a change of medium. To test this, users were asked to indicate their intended purchasing decision for each product. Our first hypothesis is stated as:

  • H1: The purchasing decision is not affected by the change of medium.

     Although similar studies have shown that Jordan’s physio-pleasure category is the most affected by the change of medium [24], other authors have argued that other categories can also be significantly influenced by the medium [23]. Therefore, the following hypothesis is stated:

  • H2: The visual medium used to present a product influences user perception regardless of its classification among Jordan’s pleasure categories.

     Furthermore, some researchers have claimed that haptic feedback can affect the users’ confidence in their evaluations [56,57]. Our third hypothesis is stated as:

  • H3: The level of Confidence in the Response is affected by the change of visual medium.

     Another factor that has not been sufficiently studied is the influence of the user's background on product assessment. Some studies have reported that, in product evaluation, individuals with a design-related background tend to employ higher-level cognitive abilities to a greater extent than untrained individuals [9]. These differences may influence the evaluation of the product. Our fourth hypothesis is stated as:

  • H4: User expertise and design background affect product evaluation.

     Finally, in some product evaluation scenarios, a complementary item is necessary to correctly understand how a product performs. Since the use of complementary items (when needed) is not feasible in online channels or even in some physical stores, an important part of the information about the product is lost. We considered this factor an interesting research goal, as more information can improve product evaluation [8]. Our fifth hypothesis is stated as:

  • H5: The use of a complementary item to assess a product influences its evaluation on the semantic scales.

Materials and Methods

To test our hypotheses, we structured our study into two analyses, A1 and A2. In the first analysis (A1), we compared the user's perception of three umbrella stand designs as presented in three different settings (R, VR, and VRPH) and determined the influence of the user’s previous design expertise on product assessment. In the second analysis (A2), we examined the perceptual differences in product evaluation elicited by the presence or absence of a complementary physical item (i.e., an umbrella) in two different media (R and VR).

A total of 91 users participated in our experiment. All users gave informed consent to participate in our study, which was approved by the CEENB-OMGs section of the Bioethics Committee of the University of Cadiz (Ref. 006/202). To describe our experimental conditions, we next discuss (1) our case study, (2) materials, (3) the semantic scale established for the evaluation of the different products, (4) the description and justification of the sample, and (5) the protocol followed for the development of the case study.

Case Study.

The umbrella stands selected for our study are shown in Figs. 2.2 and 2.3 (umbrella stand A is white and located in the center, umbrella stand B is soft yellow and located on the right, and umbrella stand C is black and located on the left). We selected this type of product for three main reasons: (1) it is an easy-to-use product, which avoids frustration and prevents it from affecting our results [9]; (2) thanks to the geometry of the selected design options, hand-tracking was not lost, which guaranteed a consistently good experience; and (3) it lends itself to the use of a complementary item (i.e., an umbrella), which is one of the research goals of our study.

Although similar studies have limited the number of options, we decided to use three prototypes to obtain more general results. We did not include more, since additional products would have increased the duration of the experiment, and participant fatigue could have affected the evaluation.

Each stand has an opening that accommodates both long and short umbrellas (at least four at a time) and is equipped with a water tray. All three models have neutral colors to mitigate potential perceptual differences elicited by color [58]. We note that umbrella stand B was displayed in the university hall for 30 days as part of a student product design exhibition. For A2, two different umbrellas were selected (a long and a short one), as shown in Fig. 1. The three umbrella stands were used in both analyses, but the umbrellas were only used in A2 to determine the impact of user–object interaction on the perception of the umbrella stands.

Fig. 1
Umbrellas used for the second case study

For the three experimental conditions (the viewing media), each product was arranged in the same manner and in identical spaces. Since only one physical model of each umbrella stand was available, two physical rooms were built to present the three scenarios: one empty room and one with the physical products, as shown in Fig. 2. The interior area of the rooms (both physical and virtual) was 5 m². Each physical room was built using seven movable panels positioned contiguously and attached to a 3-m-long wall. The scenarios are described below:

Fig. 2
(1) Exterior view of the rooms, (2) interior as perceived by users in R, and (3) interior as perceived by users in VR and VRPH

1—Scenario 1 (R): This room consisted of a real environment with the products on the floor. Users were able to view and touch the real objects but were not allowed to change their position. For A2, two physical umbrellas were placed on a small table. Users were allowed to grab and move the umbrellas to interact with the umbrella stands.

2—Scenario 2 (VR): This room consisted of a VR environment simulated on an HMD. Oculus Quest hand-tracking interaction was enabled, so users were able to see a nonintrusive virtual representation of their hands in real-time while interacting with the products. As no external devices were used apart from the HMD (Fig. 3), interaction with the objects and the environment was more natural. The products were anchored to the virtual floor, so they could not be moved. In the case of A2, virtual replicas of the two umbrellas were added to the scene and placed on a virtual table. The umbrellas could be grabbed and moved to interact with the umbrella stands.

Fig. 3
User touching an umbrella stand and grabbing the small umbrella

3—Scenario 3 (VRPH): This scenario consisted of a VR environment displayed on an HMD where the position of the virtual products was synchronized with the position of the real products, so haptic feedback was enabled. In this scenario, the physical and virtual products were fixed to the floor. Hand-tracking interaction was enabled, so users were able to see a nonintrusive virtual representation of their hands in real-time while interacting with the products.

It is important to note that scenarios 1 and 3 were displayed in the same physical room (as shown in Fig. 2.1), but never at the same time since only one participant was allowed to be present during the evaluation of the products in one medium. The use of the physical room alternated between the two scenarios.

Materials.

The VR environments were experienced using an Oculus Quest 2 HMD upgraded to version 36.0, a stand-alone immersive VR device with a single fast-switch LCD display of 1832 × 1920 pixels per eye and a refresh rate of 72 Hz. The virtual environment was designed using Unity 2020.3.11f1. We used the Oculus Integration asset (version 36.0) and the HPTK Posing and Snapping 2.0.0 asset for the hand-tracking interaction (as the Oculus Interaction SDK was not available when the experiment was run). The Passthrough Capability was enabled for the calibration of the virtual objects before starting the experiment, and the hand-tracking interaction capability was enabled to provide the nonintrusive virtual models of the user’s hands. The scene used a real-time light with hard shadows enabled, and materials were built using a Standard Shader. The virtual objects were modeled in SolidWorks 2020, and UV mapping was completed in Blender 2.93.0. Hygiene and disinfection materials were also provided to ensure optimal sanitary conditions.

For product evaluation, the participants completed a questionnaire comprising 12 semantic differential scales. Participants were asked to indicate their intended purchasing decision with a “Yes” or “No” answer and to rate their level of confidence in their response. To measure user presence in the VR environments, we used the Slater-Usoh-Steed (SUS) presence questionnaire [59]. This instrument comprises six 7-point Likert-scale questions, where a higher score indicates a higher level of presence.
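The SUS scores collected this way can be summarized per condition with the same descriptive statistics reported later in the Results (M, Mdn, SD). The following Python sketch is illustrative only: the function name, the scoring convention (one aggregate score per participant), and the sample values are our assumptions, not the study's materials.

```python
from statistics import mean, median, stdev

def sus_presence_summary(scores):
    """Summarize a group's SUS presence scores (one score per participant,
    e.g., the mean of the six 7-point items; higher = stronger presence)."""
    return {"M": mean(scores), "Mdn": median(scores), "SD": stdev(scores)}

# Hypothetical per-participant scores, for illustration only
vr_scores = [5, 4, 6, 5, 3, 5]
print(sus_presence_summary(vr_scores))
```

A per-condition call of this kind would yield the M, Mdn, and SD triplets of the form reported for VR and VRPH in the Results section.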

Semantic Differential.

The semantic differential [60] approach is a common method for product evaluation [55,61] and an effective procedure to obtain consumer opinion and preferences. It is composed of Likert-type scales of 5 or 7 points that typically use bipolar pairs of adjectives to describe the product that is being evaluated. In our study, we used the procedure by Hsu et al. [55] to generate the semantic space for our products, as described next.

Information was collected from four different sources (professional designers, design-related users, average users, and manufacturers) to establish a general criterion. The sample included 28 volunteers (20 female and 8 male) with an average age of 33.4 years. The group of professional designers (with at least five years of experience) consisted of eight people; the design-related users’ group (people with a design background, such as industrial design students, design researchers, or professors) comprised 12 participants; the average users’ group consisted of eight volunteers. Product adjectives were selected from various sites such as Ikea, Amazon, B-line, Systemtronic, or Mox.

Participants were shown a set of 12 images of various umbrella stands and asked to evaluate each product using an online form. Next, we conducted a keyword analysis. First, the frequency with which each adjective was repeated was counted, and adjectives with the same meaning were grouped together (e.g., “big” and “large”). Antonyms were also grouped to build the most frequent bipolar pairs of adjectives (for terms with no available antonym, one was added by the authors to complete the bipolar pair). We classified the bipolar pairs according to the four pleasure categories defined by Jordan [62], and the three most frequent bipolar pairs were selected for each category, which resulted in a total of 12 bipolar pairs of adjectives, as shown in Table 1.

Table 1

List of bipolar pairs of selected adjectives classified by Jordan’s pleasure categories

Physio             Psycho                  Socio                        Ideo
Light–Heavy        Simple–Complex          Attractive–Unattractive      Inexpensive–Expensive
Large–Small        Practical–Impractical   Traditional–Modern           Elegant–Ordinary
Stable–Unstable    Functional–Decorative   Minimalist–Overelaborated    Common–Original

Sample.

Two different samples were used in our study. To estimate the minimum sample size, we performed an a priori power analysis with G*Power [1]. For A1, a repeated-measures ANOVA test was applied with the following input parameters: effect size = .25, α = .05, (1−β) = .80, and one group. For A2, a repeated-measures ANOVA (within-between interaction) was applied with the following input parameters: effect size = .25, α = .05, (1−β) = .80, and two groups. Our results estimated a total sample size of 28 in both cases.
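The A1 estimate can be approximated outside G*Power using the noncentral F distribution. The sketch below is an approximation under stated assumptions, not the authors' exact computation: it assumes sphericity (ε = 1) and a correlation of .5 among repeated measures (G*Power's default; the paper does not report the value used), and all names are our own.

```python
from scipy.stats import f as f_dist, ncf

def rm_anova_sample_size(effect_f=0.25, alpha=0.05, target_power=0.80,
                         n_measurements=3, corr=0.5):
    """Smallest N for a one-group repeated-measures ANOVA (within factor),
    using a G*Power-style noncentrality: lambda = f^2 * N * m / (1 - rho)."""
    for n in range(4, 1000):
        df1 = n_measurements - 1
        df2 = (n - 1) * (n_measurements - 1)
        lam = effect_f ** 2 * n * n_measurements / (1 - corr)
        f_crit = f_dist.ppf(1 - alpha, df1, df2)        # critical F at alpha
        power = 1 - ncf.cdf(f_crit, df1, df2, lam)      # power under noncentral F
        if power >= target_power:
            return n
    return None

print(rm_anova_sample_size())
```

Under these assumptions the estimate lands near the total sample size of 28 reported above; small differences can arise from G*Power's exact options (e.g., nonsphericity correction or the "as in SPSS" effect-size convention).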

Sample for A1 Analysis.

Prior to data processing, an outlier study was conducted to obtain more robust and reliable results. Users with low confidence levels in their responses (a mean score < .60) in at least one medium for each umbrella stand who also appeared as outliers in the semantic differential scale data set were excluded from the analysis. Our final sample size was 58 users, so a power of .80 was guaranteed.
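The exclusion rule above (low mean confidence in at least one medium combined with being a semantic-scale outlier) can be sketched as follows. The records, identifiers, and outlier set are hypothetical, and the study's actual data handling may have differed in detail.

```python
# Hypothetical per-participant mean confidence (0-1) in each medium
participants = [
    {"id": 1, "confidence": {"R": 0.90, "VR": 0.80, "VRPH": 0.85}},
    {"id": 2, "confidence": {"R": 0.50, "VR": 0.90, "VRPH": 0.90}},  # low in R
    {"id": 3, "confidence": {"R": 0.70, "VR": 0.75, "VRPH": 0.80}},
]
# Hypothetical ids flagged as outliers in the semantic-scale data set
semantic_outliers = {2}

def keep(p):
    """Exclude only users who are BOTH low-confidence and semantic outliers."""
    low_confidence = any(c < 0.60 for c in p["confidence"].values())
    return not (low_confidence and p["id"] in semantic_outliers)

retained = [p["id"] for p in participants if keep(p)]
print(retained)  # participant 2 is excluded
```

Note that the conjunction matters: a participant who is merely low-confidence, or merely a semantic-scale outlier, is retained.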

The mean age of the sample was 20.9 years. The sample consisted of 39 men and 19 women, 41 of whom were involved in industrial design disciplines either academically and/or professionally. Approximately half of the participants (50.8%) had no experience with VR before participating in the study, 37.3% claimed to have limited experience with VR, and only 12% rated their experience with VR as significant; 6.7% of the participants used VR environments frequently and 3.4% very frequently. Finally, 55.9% of the participants stated that they had visual problems: 52.5% reported myopia, 11.8% astigmatism, and 3.4% hyperopia and other problems; 35.5% wore glasses and 18.6% wore contact lenses.

Sample for A2 Analysis.

Since the goal of this analysis was to compare the user's perception of a product in the presence or absence of a complementary item, some participants from A1 were also included in A2. A total of 23 participants who had completed the product evaluations in the R and VR media in first or second place (regardless of order) were randomly selected. This way, we ensured that the evaluation of products through VRPH did not affect our results. In addition, 23 new participants evaluated the products through R and VR using the complementary item.

The 46 participants in A2 guaranteed a power of .80. The mean age was 20.63 years (56.5% were men and 43.5% were women); 56.5% were involved in industrial design disciplines either academically and/or professionally; 58.7% of the participants had never used VR devices before the study and 32.6% claimed to have vision problems; 54.4% reported myopia, 17.4% astigmatism, and 6.5% hyperopia and other problems; 26.1% wore glasses and 26.1% contact lenses.

Experimental Protocol.

The complete procedure followed by the participants in our study is illustrated in Fig. 4, differentiating between those participating in A1 and A2. The experiment was conducted over five consecutive days (22.5 h in total). Twenty-seven 50-min shifts were established, allowing the participation of up to three users per shift. Participants accessed the area reserved for the experiment in pre-established groups. This area consisted of the two physical rooms (described above) and a table with chairs reserved for completing the required documentation and questionnaires. Data for A1 were collected during the first four days, and data for A2 on the last day.

Fig. 4
Complete cycle performed by each participant in both analyses

Before the experiment, participants were provided with the necessary information about the study and given the opportunity to ask questions. All participants gave informed consent. A member of the research team positioned the virtual objects for scenarios 2 and 3 using the Passthrough Capability of the Oculus Quest 2, ensuring an accurate overlay of the virtual and real objects for scenario 3.

Each participant viewed the products in the different proposed media. Before entering scenarios 2 and 3, a member of the research team confirmed that the user was correctly seeing the virtual models of their hands. After each medium, participants were asked to fill out the evaluation questionnaire before moving to the next experimental condition (Fig. 5.1). The order of the viewing media was randomized for each participant to minimize any potential unwanted effect on the results. Two participants completed the activities in each session, each accompanied by a member of the research team. Inside the room, participants were allowed to interact with the umbrella stands (Figs. 5.2 and 5.3). In the second study, participants were allowed to use two physical umbrellas to interact with the products (Fig. 5.4).

Fig. 5
(1) Participants filling out questionnaires, (2) participant experiencing the VR environment for A1, (3) participant experiencing the VRPH environment in A1, and (4) participant evaluating products in R using the umbrellas in A2
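The randomization of the presentation order described in the protocol can be sketched as below. Seeding by participant id is our own illustrative choice to make assignments reproducible; it is not a documented detail of the study's procedure.

```python
import random

MEDIA = ["R", "VR", "VRPH"]

def assign_order(participant_id):
    """Return a reproducible random presentation order for one participant."""
    rng = random.Random(participant_id)  # hypothetical seeding scheme
    order = MEDIA[:]
    rng.shuffle(order)
    return order

print(assign_order(7))
```

Each participant thus receives one of the six possible permutations of the three media, which spreads order effects across the sample.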

Results

This section is divided into two subsections corresponding to the analyses performed (A1 and A2). A schematic diagram is shown in Fig. 6 to facilitate the understanding of the section.

Fig. 6
Schematic flow diagram for the data analysis

Analysis A1 Results.

Four data sets were obtained: (1) the Semantic Scales, (2) the Confidence in the Response, (3) the Purchase Decision, and (4) the presence test scores. The descriptive statistics for the Semantic Scales, the Purchase Decision, and the Confidence in the Response are shown in Tables 2–4. Boxplots for the Semantic Scales are shown in Fig. 7. For the Semantic Scales, a value closer to −3 represents a closer correspondence with the first adjective of the pair, and a value closer to 3 indicates a closer correspondence with the second adjective. For the Purchase Decision and the Confidence in the Response, a value closer to 0 indicates “not buying the product” and a low Confidence in the Response, whereas a value closer to 1 represents “buying the product” and a high Confidence in the Response. Descriptive statistics for the SUS presence questionnaire were M = 4.70, Mdn = 5, SD = 1.62 for VR, and M = 5.10, Mdn = 5, SD = 1.63 for VRPH.

Fig. 7
Boxplots for the Semantic Scales
Table 2

Descriptive statistics for the Semantic Scales

Columns for each umbrella stand (A, B, C): R, VR, VRPH.

PHYSIO
Heavy–Light               M:   .36, .16, .03 | −.47, .51, .41 | .67, .90, .74
                          Mdn: 1.00, .00, .00 | −1.00, −1.00, −1.00 | 1.00, 1.00, 1.00
                          SD:  1.66, 1.42, 1.34 | 1.76, 1.66, 1.59 | 2.13, 1.55, 1.75
Small–Large               M:   0.21, .16, .17 | 2.22, 2.00, 1.98 | −1.74, −1.88, −1.66
                          Mdn: .00, .00, .00 | 2.00, 2.00, 2.00 | −2.00, −2.00, −2.00
                          SD:  1.02, 1.28, 1.03 | .73, .70, .91 | 1.05, .82, 1.10
Unstable–Stable           M:   1.47, 1.43, 1.38 | .97, .85, 1.28 | .76, .67, .88
                          Mdn: 2.00, 2.00, 2.00 | 2.00, 1.00, 2.00 | 1.00, 1.00, 1.00
                          SD:  1.37, 1.30, 1.31 | 1.57, 1.42, 1.36 | 1.66, 1.42, 1.46
PSYCHO
Simple–Complex            M:   .59, .41, −.55 | .33, −.31, .05 | −.40, .83, .38
                          Mdn: −1.00, .00, −1.00 | −1.00, .00, .00 | −1.00, −1.00, −1.00
                          SD:  1.27, 1.44, 1.22 | 1.66, 1.66, 1.46 | 1.62, 1.43, 1.61
Impractical–Practical     M:   1.07, .66, 1.02 | 1.53, 1.53, 1.52 | .79, .52, 0.91
                          Mdn: 1.00, 1.00, 1.00 | 2.00, 2.00, 2.00 | 1.00, 1.00, 1.00
                          SD:  1.15, 1.18, 1.19 | 1.30, 1.25, 1.22 | 1.44, 1.31, 1.33
Decorative–Functional     M:   1.02, .59, 0.95 | .12, .31, .19 | 1.02, .69, .86
                          Mdn: 1.00, 1.00, 1.00 | .00, .00, .00 | 1.00, 1.00, 1.00
                          SD:  1.33, 1.44, 1.42 | 1.68, 1.64, 1.53 | 1.32, 1.33, 1.23
SOCIO
Unattractive–Attractive   M:   .17, .05, .19 | 1.26, 1.16, 1.10 | .26, .10, .41
                          Mdn: .50, .00, .50 | 1.00, 1.00, 1.00 | .00, .00, 1.00
                          SD:  1.60, 1.54, 1.55 | 1.32, 1.36, 1.39 | 1.43, 1.55, 1.35
Traditional–Modern        M:   .26, .53, .28 | .62, 1.00, .78 | 1.38, 1.07, 1.28
                          Mdn: .00, 1.00, 1.00 | 1.00, 2.00, 1.00 | 2.00, 1.50, 1.00
                          SD:  1.38, 1.56, 1.45 | 1.76, 1.73, 1.64 | 1.47, 1.57, 1.17
Minimalist–Overelaborated M:   .76, −.86, .69 | −1.19, −1.10, −1.12 | −1.28, −1.28, −1.28
                          Mdn: −1.00, −1.00, −1.00 | −1.00, −1.00, −1.00 | −1.00, −1.00, −1.00
                          SD:  1.05, 1.25, 1.31 | 1.29, 1.47, 1.35 | 1.20, .988, 1.23
IDEO
Inexpensive–Expensive     M:   −.76, .69, .78 | .16, .03, −.05 | .53, −.43, .38
                          Mdn: −1.00, −1.00, −1.00 | .00, .00, .00 | −1.00, .00, .00
                          SD:  1.14, 1.19, 1.24 | 1.25, 1.21, 1.23 | 1.50, 1.27, 1.50
Ordinary–Elegant          M:   .05, .12, −.05 | 1.29, 1.10, 1.10 | .53, .36, .47
                          Mdn: .00, .00, .00 | 1.50, 1.00, 1.50 | 1.00, .00, .00
                          SD:  1.29, 1.44, 1.30 | 1.06, 1.29, 1.40 | 1.42, 1.27, 1.27
Common–Original           M:   .28, .14, .28 | 1.48, 1.10, 1.50 | .76, .69, .67
                          Mdn: .00, .00, .00 | 2.00, 2.00, 2.00 | 1.00, 1.00, 1.00
                          SD:  1.40, 1.50, 1.62 | 1.51, 1.72, 1.40 | 1.57, 1.54, 1.48

Note: For each umbrella stand (A, B, C) and Semantic Scale, the highest value for mean is shown in bold and the lowest one in italics.

Table 3

Descriptive statistics for the Confidence in the Response by Semantic Scale

Columns for each umbrella stand (A, B, C): R, VR, VRPH.

PHYSIO
Heavy–Light               M:   .86, .52, .77 | .91, .60, .79 | .88, .55, .79
                          Mdn: 1.00 in all conditions
                          SD:  .35, .50, .42 | .28, .49, .41 | .33, .50, .41
Small–Large               M:   1.00, .97, 1.00 | 1.00, .95, .95 | 1.00, .98, .98
                          Mdn: 1.00 in all conditions
                          SD:  .00, .18, .00 | .00, .22, .22 | .00, .13, .13
Unstable–Stable           M:   .95, .72, .90 | .90, .76, .95 | .90, .69, .84
                          Mdn: 1.00 in all conditions
                          SD:  .22, .45, .31 | .31, .43, .22 | .31, .47, .37
PSYCHO
Simple–Complex            M:   .98, .97, .95 | .97, .93, .98 | .95, .91, .95
                          Mdn: 1.00 in all conditions
                          SD:  .13, .18, .23 | .18, .26, .13 | .22, .28, .22
Impractical–Practical     M:   .86, .78, .83 | .97, .90, .88 | .79, .67, .78
                          Mdn: 1.00 in all conditions
                          SD:  .35, .42, .38 | .18, .31, .33 | .41, .47, .42
Decorative–Functional     M:   .95, .83, .84 | .95, .88, .84 | .90, .66, .76
                          Mdn: 1.00 in all conditions
                          SD:  .22, .38, .37 | .22, .33, .37 | .31, .48, .43
SOCIO
Unattractive–Attractive   M:   1.00, .97, .95 | .95, .98, .97 | .97, .91, .95
                          Mdn: 1.00 in all conditions
                          SD:  .00, .18, .23 | .22, .13, .18 | .18, .28, .22
Traditional–Modern        M:   .90, .97, .90 | .97, .98, .93 | .97, .90, .91
                          Mdn: 1.00 in all conditions
                          SD:  .31, .18, .31 | .18, .13, .26 | .18, .31, .28
Minimalist–Overelaborated M:   .98, .93, .97 | .97, .95, .93 | .98, .95, .93
                          Mdn: 1.00 in all conditions
                          SD:  .13, .26, .19 | .18, .22, .26 | .13, .22, .26
IDEO
Inexpensive–Expensive     M:   .48, .41, .49 | .52, .40, .52 | .47, .41, .45
                          Mdn: .00, .00, .00 | 1.00, .00, 1.00 | .00, .00, .00
                          SD:  .50, .50, .50 | .50, .49, .50 | .50, .50, .50
Ordinary–Elegant          M:   .90, .91, .97 | .97, .91, .97 | .89, .84, .90
                          Mdn: 1.00 in all conditions
                          SD:  .31, .28, .27 | .18, .28, .18 | .31, .37, .31
Common–Original           M:   .91, .85, .91 | .93, .91, .95 | .86, .81, .84
                          Mdn: 1.00 in all conditions
                          SD:  .28, .37, .34 | .26, .28, .22 | .35, .40, .37

Note: For each umbrella stand (A, B, C) and Semantic Scale, the highest value for the mean is shown in bold and the lowest one in italics.

Table 4

Descriptive statistics for the purchase decision and the Confidence in the Response by media

Columns for each umbrella stand (A, B, C): R, VR, VRPH.

Purchase Decision          M:   .34, .31, .41 | .67, .74, .76 | .52, .43, .53
                           Mdn: .00, .00, .00 | 1.00, 1.00, 1.00 | 1.00, 0.00, 1.00
                           SD:  .48, .47, .50 | .47, .44, .43 | .50, .50, .50
Confidence in the Response M:   .90, .82, .87 | .92, .85, .89 | .88, .77, .84
                           Mdn: 1.00 in all conditions
                           SD:  .30, .39, .34 | .28, .36, .32 | .33, .42, .37

Note: For each umbrella stand (A, B, C) and data set, the highest value for the mean is shown in bold and the lowest one in italics.

A normality test was performed to select the appropriate statistical test to determine differences between means for data sets (1), (2), and (3). We used a Kolmogorov–Smirnov normality test (significance level of .05), as the sample size was >50. Our results showed that the data were not normally distributed, so parametric tests were unsuitable. For the semantic differential, we applied the aligned rank transform (ART) procedure [63], a powerful and robust nonparametric alternative to other traditional techniques [64]. ART relies on a preprocessing step that “aligns” the data before assigning averaged ranks; common ANOVA procedures can then be applied. For the Confidence in the Response and the Purchase Decision, Cochran’s Q test was used (dichotomous data).
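As a sketch of this first step (the response values below are hypothetical; the .05 threshold is the one used in the study), the Kolmogorov–Smirnov test from SciPy can be applied to standardized scores:

```python
import numpy as np
from scipy import stats

# Hypothetical 7-point Likert responses (n > 50, as in the study).
responses = np.array([1, 2, 2, 3, 1, 0, 2, 3, 3, 2] * 6, dtype=float)

# Standardize, then compare against a standard normal distribution.
z = (responses - responses.mean()) / responses.std(ddof=1)
stat, p = stats.kstest(z, "norm")

# p < .05 -> reject normality, so fall back to nonparametric methods
# (ART for the Semantic Scales, Cochran's Q for dichotomous data).
print(f"D = {stat:.3f}, p = {p:.4f}")
```

Discrete Likert responses typically fail this test, as here, which is why the nonparametric route was taken.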

The results of the two-factor repeated-measures ANOVA (with medium as the within-subjects factor and user background as the between-subjects factor) for the Semantic Scales are shown in Table 5. Statistically significant differences were found for the “Small–Large” and “Decorative–Functional” adjectives for design A. The bipolar pair “Traditional–Modern” showed significant differences for B and C. Finally, design C showed significant differences for “Simple–Complex” and “Impractical–Practical.”

Table 5

Two-factor mixed ANOVA for the Semantic Scales with media and participant’s design expertise (Exp) as main factors

Each cell lists the p values for the Media, Exp, and Mixed factors, in that order.

PHYSIO
Heavy–Light               A: .080, .178, .050 | B: .677, .682, .637 | C: .957, .875, .290
Small–Large               A: .002, .683, .687 | B: .234, .807, .581 | C: .144, .511, .510
Unstable–Stable           A: .241, .775, .288 | B: .078, .578, .326 | C: .655, .317, .647
PSYCHO
Simple–Complex            A: .932, .685, .003 | B: .153, .914, .470 | C: .018, .706, .393
Impractical–Practical     A: .037*, .569, .271 | B: .112, .736, .256 | C: .001, .233, .098
Decorative–Functional     A: .004, .215, .128 | B: .218, .129, .586 | C: .285, .669, .002
SOCIO
Unattractive–Attractive   A: .467, .055, .792 | B: .641, .210, .470 | C: .042*, .342, .450
Traditional–Modern        A: .724, .009, .559 | B: .014, .226, .319 | C: .046, .081, .532
Minimalist–Overelaborated A: .545, .275, .075 | B: .104, .795, .774 | C: .161, .897, .364
IDEO
Inexpensive–Expensive     A: .545, .222, .084 | B: .121, .284, <.001 | C: .339, .809, .036
Ordinary–Elegant          A: .193, .038, .049 | B: .662, .305, .270 | C: .220, .687, .585
Common–Original           A: .161, .263, .137 | B: .332, .786, .391 | C: .672, .355, .576

Note: Factor p value for each umbrella stand (A, B, C) and Semantic Scale in which perceptual differences were found are shown in bold. Values with * showed no differences in the post-hoc analysis.

Umbrella stand A was the only design for which the influence of the participant’s background was significant for some adjectives. In addition, a combined effect of medium and background was found for “Simple–Complex” and “Ordinary–Elegant” for design A, for “Inexpensive–Expensive” for designs B and C, and for “Decorative–Functional” for C.

To determine the conditions between which these differences were found for the presentation-medium factor, we conducted pairwise comparisons for the Semantic Scales (Table 6) using Tukey’s honestly significant difference (HSD) test. This test assumes that each group has approximately equal variance (homogeneity of variances); since no between-subjects factors were specified for these comparisons, Levene’s test confirmed that the assumption was met.
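A minimal sketch of this pairwise step, using hypothetical ratings for one Semantic Scale and the `pairwise_tukeyhsd` implementation from statsmodels (one common way to run Tukey's HSD; not necessarily the software used in the study):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ratings for one bipolar pair, 6 participants per medium.
ratings = np.array([2, 2, 3, 3, 2, 3,        # R
                    -2, -2, -1, -2, -1, -2,  # VR
                    2, 3, 2, 2, 3, 3])       # VRPH
media = np.array(["R"] * 6 + ["VR"] * 6 + ["VRPH"] * 6)

# Tukey's HSD compares every pair of media at alpha = .05.
result = pairwise_tukeyhsd(endog=ratings, groups=media, alpha=0.05)
print(result.summary())  # reject=True marks pairs with significant differences
```

With this illustrative data, the R–VR and VR–VRPH pairs differ while R–VRPH does not, the same pattern reported for several adjectives in Table 6.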

Table 6

Post-hoc tests for the Semantic Scales

Pairwise p values for each media pair; the umbrella stand to which each set of values corresponds is indicated in parentheses.

PHYSIO
Small–Large (A):             R–VR p = .009; R–VRPH p = .854; VR–VRPH p = .035
Unstable–Stable:             no post-hoc p values reported
PSYCHO
Simple–Complex (C):          R–VR p = .136; R–VRPH p = .580; VR–VRPH p = .035
Impractical–Practical (A):   R–VR p = .085; R–VRPH p = .980; VR–VRPH p = .062
Impractical–Practical (C):   R–VR p = .164; R–VRPH p = .131; VR–VRPH p < .001
Decorative–Functional (A):   R–VR p = .010; R–VRPH p = .999; VR–VRPH p = .015
SOCIO
Unattractive–Attractive (C): R–VR p = .538; R–VRPH p = .255; VR–VRPH p = .056
Traditional–Modern (B):      R–VR p = .021; R–VRPH p = .257; VR–VRPH p = .286
Traditional–Modern (C):      R–VR p = .040; R–VRPH p = .252; VR–VRPH p = .696

Note: P values for each umbrella stand (A, B, C) and Semantic Scale showing perceptual differences are shown in bold.

To determine differences in the Confidence in the Response, we performed Cochran’s Q test for nonparametric dichotomous data. Our results (Table 7) showed that the level of confidence is affected by the medium for the adjectives “Heavy–Light” and “Unstable–Stable” for every product. Statistically significant differences were also found for umbrella stand C for “Decorative–Functional”. Post-hoc tests are shown in Table 8.
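A sketch of this test, with hypothetical 0/1 confidence responses for one adjective across the three media (statsmodels provides `cochrans_q` for exactly this repeated-measures dichotomous design):

```python
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q

# Rows: participants; columns: media (R, VR, VRPH).
# 1 = confident in the response, 0 = not confident (hypothetical data:
# most participants lose confidence in plain VR).
confidence = np.array([[1, 0, 1]] * 8 + [[1, 1, 1]] * 2)

res = cochrans_q(confidence, return_object=True)
print(f"Q = {res.statistic:.3f}, p = {res.pvalue:.4f}")
```

A significant Q here indicates that the proportion of confident responses differs across the three media, which is then localized with pairwise post-hoc tests as in Table 8.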

Table 7

Q of Cochran for the level of Confidence in the Response

df = 2 for all tests. Each cell lists Q and its p value.

PHYSIO
Heavy–Light               A: Q = 26.000, p < .001 | B: Q = 22.455, p < .001 | C: Q = 24.250, p < .001
Small–Large               A: Q = 4.000, p = .125 | B: Q = 4.500, p = .105 | C: Q = 1.000, p = .607
Unstable–Stable           A: Q = 14.623, p = .001 | B: Q = 13.857, p = .001 | C: Q = 13.000, p = .002
PSYCHO
Simple–Complex            A: Q = 1.200, p = .549 | B: Q = 4.667, p = .097 | C: Q = .800, p = .670
Impractical–Practical     A: Q = 2.111, p = .348 | B: Q = 3.500, p = .174 | C: Q = 3.600, p = .165
Decorative–Functional     A: Q = 5.733, p = .057 | B: Q = 5.091, p = .078 | C: Q = 11.043, p = .004
SOCIO
Unattractive–Attractive   A: Q = 3.500, p = .174 | B: Q = 1.200, p = .549 | C: Q = 1.400, p = .497
Traditional–Modern        A: Q = 2.909, p = .234 | B: Q = 2.333, p = .311 | C: Q = 2.889, p = .236
Minimalist–Overelaborated A: Q = 2.800, p = .247 | B: Q = .857, p = .651 | C: Q = 3.600, p = .165
IDEO
Inexpensive–Expensive     A: Q = 2.211, p = .331 | B: Q = 4.455, p = .108 | C: Q = .824, p = .662
Ordinary–Elegant          A: Q = 1.273, p = .529 | B: Q = 2.250, p = .325 | C: Q = 1.385, p = .500
Common–Original           A: Q = 2.167, p = .338 | B: Q = .857, p = .651 | C: Q = 1.385, p = .500

Note: P values for each umbrella stand (A, B, C) and Semantic Scale showing perceptual differences are shown in bold.

Table 8

Post-hoc tests for level of Confidence in the Response for each bipolar pair of adjectives

Columns: A | B | C.

PHYSIO
Heavy–Light        R–VR:    p < .001 | p < .001 | p < .001
                   R–VRPH:  p = .180 | p = .039 | p = .227
                   VR–VRPH: p = .001 | p = .007 | p = .001
Unstable–Stable    R–VR:    p = .002 | p = .057 | p = .002
                   R–VRPH:  p = .250 | p = .250 | p = .453
                   VR–VRPH: p = .013 | p = .001 | p = .035
PSYCHO
Decorative–Functional (C only): R–VR p = .001; R–VRPH p = .039; VR–VRPH p = .238

Note: P values for each umbrella stand (A, B, C) and Semantic Scale showing perceptual differences are shown in bold.

To examine the influence of the participant’s background on the Confidence in the Response, we report the descriptive statistics for the two groups of participants: design-trained (T) and nondesign-trained (NT). The T group comprised 41 people, whereas the NT group comprised 17 participants. Results are shown in Table 9.

Table 9

Descriptive statistics for the Confidence in the Response by participant’s expertise in design

Columns for each umbrella stand (A, B, C): R (T, NT) | VR (T, NT) | VRPH (T, NT).

A   M:   .89, .92 | .83, .79 | .87, .89
    Mdn: 1.00 in all conditions
    SD:  .31, .28 | .38, .41 | .35, .31
B   M:   .92, .91 | .86, .81 | .88, .91
    Mdn: 1.00 in all conditions
    SD:  .28, .28 | .35, .39 | .33, .29
C   M:   .86, .92 | .76, .80 | .81, .91
    Mdn: 1.00 in all conditions
    SD:  .34, .28 | .42, .40 | .39, .29

Finally, no significant differences were found for the intended purchasing decision, as indicated by Cochran’s Q test: χ²(2) = 4.67, p = .097 for A; χ²(2) = 4.20, p = .122 for B; and χ²(2) = 3.20, p = .212 for C.

Analysis A2.

To determine the influence of the use of a complementary item (i.e., a physical umbrella) on the evaluation of each umbrella stand, a two-factor mixed ANOVA was performed, with the complementary item (Obj) as the between-subjects factor and the presentation medium as the within-subjects factor (Table 11). Descriptive statistics for the Semantic Scales are shown in Table 10.

Table 10

Descriptive statistics for the semantic differential organized by presence (U) or absence (NU) of the umbrella

Columns for each umbrella stand (A, B, C): R (U, NU); VR (U, NU).

Heavy–Light               M:   −.30, .39; .30, .65 | .43, −.39; .61, −.17 | 1.00, .96; .35, .91
                          Mdn: −1.00, .00; 1.00, 1.00 | −1.00, −1.00; .00, .00 | 1.00, 2.00; .00, 1.00
                          SD:  1.58, 1.44; 1.15, 1.40 | 2.09, 1.85; 1.73, 1.67 | 2.04, 2.12; 1.75, 1.59
Small–Large               M:   .13, .35; .30, −.17 | 2.26, 1.96; 2.13, 1.70 | −2.39, −1.52; −1.96, −1.52
                          Mdn: .00, .00; .00, .00 | 2.00, 2.00; 2.00, 2.00 | −3.00, −2.00; −2.00, −2.00
                          SD:  1.25, .78; 1.36, 1.07 | .92, .71; 1.06, .70 | .72, 1.34; 1.02, .79
Unstable–Stable           M:   1.13, 1.78; 1.43, 1.26 | 1.78, .61; 1.09, .39 | .83, .65; .78, 1.13
                          Mdn: 1.00, 2.00; 2.00, 2.00 | 2.00, 1.00; 1.00, 1.00 | 2.00, 1.00; 1.00, 1.00
                          SD:  1.60, 1.28; 1.44, 1.39 | 1.48, 1.64; 1.50, 1.23 | 1.95, 1.70; 1.78, 1.32
Simple–Complex            M:   −1.43, −.35; −1.74, −.26 | .00, .26; .09, −.04 | .52, −.26; .78, −.39
                          Mdn: −2.00, .00; −2.00, .00 | .00, 1.00; .00, .00 | .00, −1.00; −1.00, −1.00
                          SD:  1.44, 1.23; 1.05, 1.32 | 1.54, 1.79; 1.68, 1.36 | 1.27, 1.68; 1.78, 1.62
Impractical–Practical     M:   .13, 1.13; 1.09, .30 | 2.39, 1.30; 2.04, 1.22 | .87, .83; .78, .83
                          Mdn: .00, 2.00; 1.00, 1.00 | 3.00, 2.00; 2.00, 2.00 | 2.00, 1.00; 1.00, 1.00
                          SD:  1.71, 1.39; 1.31, 1.36 | .72, 1.55; 1.22, 1.28 | 2.03, 1.40; 1.78, 1.07
Decorative–Functional     M:   .87, .83; .96, .22 | .52, .13; .13, .00 | 1.17, 1.13; 1.09, .91
                          Mdn: 1.00, 1.00; 1.00, .00 | 1.00, .00; .00, .00 | 2.00, 1.00; 2.00, 1.00
                          SD:  1.39, 1.59; 1.58, 1.44 | 1.44, 1.60; 1.39, 1.45 | 1.64, 1.18; 1.73, 1.16
Unattractive–Attractive   M:   .13, .61; .13, .30 | 1.48, 1.39; 1.78, 1.17 | .26, .35; .22, .17
                          Mdn: .00, 1.00; .00, .00 | 2.00, 1.00; 2.00, 1.00 | .00, .00; .00, .00
                          SD:  1.74, 1.59; 1.36, 1.46 | 1.41, 1.08; 1.38, 1.15 | 1.39, 1.37; 1.20, 1.61
Traditional–Modern        M:   .35, .30; .65, .65 | .91, .57; 1.26, 1.09 | 1.39, 1.61; 1.09, .91
                          Mdn: .00, .00; 1.00, 1.00 | 1.00, 1.00; 1.00, 2.00 | 2.00, 2.00; 1.00, 1.00
                          SD:  1.72, 1.29; 1.43, 1.50 | 1.68, 2.02; 1.45, 1.83 | 1.37, 1.44; 1.24, 1.73
Minimalist–Overelaborated M:   −1.22, .65; −1.52, .83 | −1.61, .91; −1.65, −1.17 | −1.96, −1.70; −1.57, −1.39
                          Mdn: −1.00, −1.00; −2.00, −1.00 | −2.00, .00; −2.00, −1.00 | −2.00, −2.00; −2.00, −2.00
                          SD:  1.20, 1.11; .99, 1.34 | 1.27, 1.41; 1.34, 1.40 | .88, .97; .99, .84
Inexpensive–Expensive     M:   .91, .87; −1.04, .70 | .43, .04; .61, .22 | .26, .57; .70, .39
                          Mdn: −1.00, −1.00; −1.00, −1.00 | .00, .00; 1.00, .00 | .00, −1.00; −1.00, .00
                          SD:  .90, 1.22; 1.26, 1.02 | 1.16, 1.19; .99, 1.13 | 1.36, 1.50; 1.15, 1.08
Ordinary–Elegant          M:   .09, .00; .35, .04 | 1.61, 1.48; 1.43, 1.22 | .83, .70; .70, .30
                          Mdn: .00, .00; .00, .00 | 2.00, 2.00; 2.00, 1.00 | 1.00, 1.00; 1.00, .00
                          SD:  1.65, 1.31; 1.11, 1.33 | 1.27, .99; .90, 1.31 | 1.34, 1.29; 1.52, 1.22
Common–Original           M:   .78, −.61; .74, −.17 | 2.04, 1.70; 2.04, 1.35 | .96, .91; .57, .65
                          Mdn: −1.00, −1.00; −1.00, .00 | 2.00, 2.00; 2.00, 2.00 | 2.00, 1.00; 1.00, 1.00
                          SD:  1.68, 1.50; 1.45, 1.27 | 1.07, 1.15; .88, 1.61 | 1.80, 1.53; 1.34, 1.37

Note: For each umbrella stand (A, B, C) and Semantic Scale, the highest value for the mean is shown in bold.

Table 11

Two-factor mixed ANOVA for the Semantic Scales with medium and complementary item (Obj) as main factors

Each cell lists the p values for the Media, Obj, and Mixed factors, in that order.

PHYSIO
Heavy–Light               A: .096, .104, .496 | B: .013, .488, .118 | C: .081, .500, .323
Small–Large               A: .034, .392, .452 | B: .153, .009, .583 | C: .056, .012, .606
Unstable–Stable           A: .610, .643, .057 | B: .013, .004, .398 | C: .583, 1.000, .377
PSYCHO
Simple–Complex            A: .936, <.001, .586 | B: .266, .758, .603 | C: .339, .439, .619
Impractical–Practical     A: .771, .679, <.001 | B: .164, .004, .962 | C: .675, .360, .794
Decorative–Functional     A: .333, .242, .129 | B: .530, .226, .284 | C: .482, .476, .758
SOCIO
Unattractive–Attractive   A: .734, .345, .143 | B: .750, .166, .061 | C: .078, .589, .392
Traditional–Modern        A: .052, .739, .731 | B: .046, .771, .577 | C: .003, .743, .373
Minimalist–Overelaborated A: .167, .075, .923 | B: .266, .113, .391 | C: .044, .196, .931
IDEO
Inexpensive–Expensive     A: .992, .432, .346 | B: .169, .135, .905 | C: .559, .976, .069
Ordinary–Elegant          A: .642, .975, .618 | B: .113, .513, .845 | C: .245, .380, .342
Common–Original           A: .181, .266, .406 | B: .269, .156, .582 | C: .064, .949, .547

Note: Factor p value for each umbrella stand (A, B, C) and Semantic Scale in which perceptual differences were found are shown in bold.

Discussion

In our first hypothesis (H1), we stated that the purchasing decision is not affected by the change of medium. Based on the results of our study, the presentation medium does not appear to have a significant effect on the intended purchasing decision for umbrella stands (no significant differences were found between means for any of the products: pA = .097, pB = .122, pC = .212). Our findings agree with other studies suggesting that although users need as much information as possible to make a purchasing decision [65], even a simple presentation medium provides sufficient detail to perform a reasonable evaluation [66]. In our case, considering that a 7-point Likert scale was used for product evaluation, the scores obtained in the different media (Table 2) did not vary significantly. In addition, the Confidence in the Response scores (Tables 3 and 4) were high in all three visual media, which suggests that participants had sufficient information to make a purchasing decision.

Our results differ from those obtained in a similar study with coffee makers conducted by Palacios-Ibáñez et al. [23], where significant differences between means were found for the purchasing decision. These differences may be explained by the composition of the samples, whose average age was very young (mean age of 20 years). In our study, the selected product may not have elicited a purchasing decision in any medium for umbrella stands A and C (Table 4). In the case of umbrella stand B, previous familiarity with the product could have influenced the results, as stated by Söderman [44]. Therefore, hypothesis H1 is rejected for the case of umbrella stands and our experimental conditions. To draw more generalizable conclusions, however, it is necessary to expand the study to other product typologies.

We note, however, that the descriptive statistics for this dataset (Table 4) show higher values in the VRPH condition, which may indicate that presenting the product in this type of setting may lead to an increase in intended purchasing decision. The user’s Confidence in the Response also shows higher values in this medium, which may have had a positive effect on the purchasing decision.

In H2, we postulated that the medium used to display the product influences user perception. Our results (Table 5) revealed significant differences between means for a subset of the bipolar pairs of adjectives in different categories, which confirms H2 for the case of umbrella stands. However, we cannot state which specific adjectives are influenced by the medium, as none of the adjectives was affected in more than one product. For the umbrella stands, we note that certain categories are more sensitive to the presentation medium than others. Our results are in line with those of other authors. Artacho-Ramírez et al. obtained similar results when studying the influence of the graphical representation on the evaluation of different models of a loudspeaker [22], and Agost et al. reached similar conclusions when assessing two different types of furniture [46]. Palacios-Ibáñez et al. also reported similar results when evaluating three different designs of a coffee maker [23], and Galán et al. [24] did the same using an armchair as the main stimulus. This set of studies demonstrates that, regardless of the product selected as stimulus, some characteristics may be more difficult to evaluate depending on the medium used to assess them.

The psycho-pleasure category (pleasures related to people’s cognitive and emotional reactions) was highly affected by the presentation medium (Table 5). In this category, design A showed significant differences for “Decorative–Functional” between R–VR and VRPH–VR, whereas design C showed significant differences for “Simple–Complex” and “Impractical–Practical” between VR–VRPH (Table 6). In any case, scores in the VRPH setting were similar to those in the R setting (Table 2), which suggests that the haptic feedback may have influenced the evaluation of the product. Although previous studies have established that the sense of touch mostly affects adjectives related to the physio-pleasure category (derived from the sensory organs) [67], its positive influence on the sense of presence may also have had an impact on product evaluation, as passive haptics significantly enhance virtual environments and can affect cognitive processes [21]. VR also scored lowest both on the presence questionnaire and in the level of Confidence in the Response for the affected adjectives.

Umbrella stand A also showed significant differences (Table 5) for “Small–Large” (physiological pleasure). This result is in line with other authors [24,25], who reported that pleasures deriving from the sensory organs are better assessed with the addition of physical interaction. In this case, differences were expected between R–VR and VRPH–VR (Table 6), as VR was the condition that offered the least information due to the absence of tactile feedback. The scores for the VRPH condition were similar to those in the R setting, which was expected, since the product information available in both media was similar (physical interaction and viewing the product at full scale). Although we expected umbrella stands B and C to show differences for this pleasure category and to behave similarly to design A, it is important to note that the studies mentioned earlier were limited to a single product design and typology, so the findings are not necessarily generalizable to all designs or products, as other factors may also impact product assessment [68].

Finally, for B and C, significant differences were found for “Traditional–Modern” between R–VR (Table 6). This Semantic Scale is part of the sociological pleasure category (pleasures deriving from relationships with others), which was the second most affected category and the only one that was affected for design B. These results could be explained by the participants’ previous knowledge of this product, as umbrella stand B was displayed in the university hall for several days. Previous studies reported that previous knowledge of the product can minimize perceptual differences between means [44].

In H3, we speculated that the Confidence in the Response is affected by the presentation medium. Cochran’s Q test results (Tables 7 and 8) showed differences between means for adjectives related to the physio-pleasure category, which confirms H3. The descriptive statistics for the Confidence in the Response (Table 3) also showed the highest values for R, followed by VRPH and VR, respectively, which suggests that haptics can help to increase the participants’ Confidence in the Response. Some authors have argued that haptic feedback during product evaluation increases the user’s Confidence in the Response [56], which explains why the VR setting obtained the lowest scores for this dataset, and why statistically significant differences were found for those product features that relied heavily on the sense of touch. We note that significant differences were also found for umbrella stand C for “Decorative–Functional” between R–VR and R–VRPH. Some participants admitted during the experiment that they were unsure whether the product was an umbrella stand, so this aspect could have been difficult to evaluate.

Our results show that in 64% of the cases, the mean scores obtained in the VRPH medium for the semantic differential were similar to those in the R setting. Descriptive statistics for the Confidence in the Response (Tables 3 and 4) show the same result for 78% of the cases. According to Grohmann et al. [56], the introduction of tactile feedback has a positive impact on this factor, reducing perceptual variations. This claim is supported by the results of our two-factor ANOVA for the semantic differential (Table 5) and the post-hoc tests (Table 6): even for the adjectives that showed statistical differences, no differences were found between R and VRPH. Finally, we highlight that the presentation of the product in a VRPH setting can positively influence the user’s purchasing decision, at least for the case of umbrella stands. Considering the average age of our sample, it would be interesting to test whether these results hold for older adults, since technology acceptance tends to be negatively associated with age [69].

Our results show that the use of touch during the virtual experience could be an effective alternative for product evaluation in the specific case of umbrella stands. Studies with other product typologies are required to draw more generalized conclusions.

To test H4 (how user expertise and design background affect product evaluation), we performed a two-factor mixed ANOVA for the Semantic Scales (Table 5). Results showed that “Traditional–Modern” and “Ordinary–Elegant” were influenced by the participant’s design background for design A, which means that this factor is not associated with a specific design typology or set of adjectives but can affect the perception of certain product characteristics. The psycho and ideo categories showed a combined effect of the medium and the participant’s design expertise. Some studies have reported that, in product evaluation, individuals with a design-related background tend to rely more on higher-level cognitive abilities than untrained individuals [52]. Therefore, the influence of design expertise on the psycho-pleasure category was expected. The descriptive statistics for the Confidence in the Response showed that having a design background does not increase the participant’s Confidence in the Response during product evaluation, which aligns with the study of Forbes et al. [50] and confirms H4. Making an assessment about specific product features is a subjective task, so having a background in design does not necessarily increase this level of confidence.
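A two-factor mixed ANOVA crosses a between-subjects factor (design background) with a within-subjects factor (presentation medium), each with its own error term. The sketch below is purely illustrative of that sum-of-squares decomposition for a balanced design; the scores, group labels, and function name are hypothetical and do not reproduce the study’s data or software:

```python
import numpy as np

def mixed_anova(scores, group):
    """Two-factor mixed ANOVA for a balanced design.

    scores : (n_subjects, k) array, one row per subject, one column
             per within-subject condition (e.g., R / VR / VRPH).
    group  : length-n_subjects array of between-subject labels
             (e.g., designer vs. nondesigner).
    Returns F statistics for the between factor (A), the within
    factor (B), and their interaction (AxB).
    """
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    n, k = scores.shape
    levels = np.unique(group)
    g = len(levels)
    n_per = n // g  # balanced design assumed
    grand = scores.mean()

    subj_means = scores.mean(axis=1)
    group_means = np.array([scores[group == a].mean() for a in levels])
    cond_means = scores.mean(axis=0)
    cell_means = np.array([scores[group == a].mean(axis=0) for a in levels])

    # Between-subjects partition: subjects-within-groups is the error for A.
    ss_between_subj = k * ((subj_means - grand) ** 2).sum()
    ss_a = n_per * k * ((group_means - grand) ** 2).sum()
    ss_subj_within = ss_between_subj - ss_a

    # Within-subjects partition: residual is the error for B and AxB.
    ss_total = ((scores - grand) ** 2).sum()
    ss_within_subj = ss_total - ss_between_subj
    ss_b = n * ((cond_means - grand) ** 2).sum()
    ss_ab = n_per * ((cell_means - group_means[:, None]
                      - cond_means[None, :] + grand) ** 2).sum()
    ss_err = ss_within_subj - ss_b - ss_ab

    f_a = (ss_a / (g - 1)) / (ss_subj_within / (n - g))
    f_b = (ss_b / (k - 1)) / (ss_err / ((k - 1) * (n - g)))
    f_ab = (ss_ab / ((g - 1) * (k - 1))) / (ss_err / ((k - 1) * (n - g)))
    return f_a, f_b, f_ab

# Hypothetical semantic-scale scores: 3 designers, 3 nondesigners,
# each rating the product in 3 media.
scores = [[5, 2, 2], [4, 3, 5], [6, 4, 5],
          [2, 2, 2], [3, 3, 3], [4, 4, 4]]
group = ["D", "D", "D", "N", "N", "N"]
f_a, f_b, f_ab = mixed_anova(scores, group)
```

Each F statistic is then compared against an F distribution with the corresponding degrees of freedom; a significant interaction term would indicate the combined medium-by-expertise effect discussed above.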

Finally, a two-factor mixed ANOVA (Table 11) was performed to test H5 (the use of a complementary item to assess a product influences the evaluation of the product Semantic Scales). Our goal was to create a richer and more sophisticated interactive experience, which is often limited in online media, and determine its impact on product perception. Our results showed that the presence of the complementary item is a factor that significantly influences product perception, which confirms H5. For umbrella stands, the physio-pleasure category was highly affected by this factor. The use of a physical umbrella while evaluating the main stimuli influenced the assessment of “Small–Large” for umbrella stands B and C. The descriptive statistics (Table 10) show that designs A and C were perceived as smaller when using the umbrella, whereas product B was perceived as larger. Likewise, Jordan’s psycho-pleasure category was the second most affected by the use of the complementary item. The psycho-pleasure category is related to people’s cognitive and emotional reactions toward a product, so the complementary item may have provided additional information about the product’s performance. The descriptive statistics show that the umbrella stands were perceived as more practical when the umbrella was used (67% of the cases), so a more sophisticated experience where all the necessary elements for product testing are present can improve product perception thanks to the availability of more information [8].

A summary of our results is shown in Table 12.

Table 12

Summary of results and findings of the study

Hypothesis | Accept/reject | Comments
H1 | Rejected | The purchase decision was not influenced by the presentation medium for the case of umbrella stands, for two possible reasons: the participant may have had enough information to make a purchase decision in each experimental condition, or the selected product may not have triggered a purchase decision in any medium
H2 | Accepted | User perception is influenced by the visual medium used to present the product. Significant differences between means were found for a subset of adjectives in different categories, but we cannot state which specific adjectives are influenced by the medium, as none of the adjectives were affected in more than one product
H3 | Accepted | Confidence in the Response was affected by the presentation medium. Differences between means were found for adjectives related to the physio-pleasure category, and descriptive statistics for this dataset showed that haptics can help increase the participants’ Confidence in the Response
H4 | Accepted | Two pairs of adjectives were influenced by the participant’s design background for one of the designs. Making an assessment about specific product features is a subjective task, so having a background in design does not necessarily increase this level of confidence
H5 | Accepted | The use of a complementary element (when needed) improved the evaluation of the product thanks to the additional information provided

Implications for Design Practice.

Traditional prototyping processes in product development can be costly. Although new means of representation have emerged in recent years, several authors have shown that the user's perceptual response to a particular product may not be the same depending on the medium used to present it.

Some researchers have begun to explore virtual experiences that leverage the sense of touch, but little research has examined the impact of displaying nonintrusive virtual hand models in the environment. The capability is currently available on low-cost VR devices, but its impact on product evaluation has not yet been studied.

In our study, the hypothesis on the purchasing decision could not be confirmed due to certain limitations of our experiments. Nevertheless, our results highlight the need for designers to conduct user studies on potential customers. We have also shown that the user’s background (i.e., designers versus nondesigners) can influence product assessment. Therefore, the characteristics of the potential buyers must be reflected in the sample.

For our dataset, the VRPH medium scored higher than VR, which suggests that the medium is important when presenting products at the point of sale and that, in some cases, the right medium can positively influence purchasing decisions. On a practical level, both designers and retailers should allow users to physically interact with their products, since it can result in more favorable emotional responses.

On the other hand, the medium used to present a product may result in perceptual differences during evaluation, as some features of the real product may be difficult or impossible to replicate virtually. In our case, leveraging the sense of touch seems to minimize these differences. The use of passive haptics in the virtual experience for product evaluation may be an effective strategy. From a practical point of view, for some type of products, 3D printers can be used to create physical mockups that enable touch capabilities in the VR environment. In this regard, future development of object tracking capabilities and more accurate hand-tracking performance of HMDs will facilitate the creation of VRPH experiences.

The user’s Confidence in the Response during the virtual experience also increases with the use of tactile feedback, which can have a positive impact during the evaluation. Finally, based on the results related to the use of the complementary item, it is important for designers and marketing professionals to provide consumers with all the necessary elements to interact with the product at the time of evaluation. Otherwise, valuable information may remain hidden, distorting the user’s emotional response toward the product.

Conclusions

Participants in our study were exposed to three products (i.e., umbrella stands) in different visual media: a real setting (R), VR, and VRPH. For the evaluation, participants used a semantic differential composed of 12 bipolar pairs of adjectives grouped by Jordan’s pleasure categories. In addition, they were asked to provide a purchase decision and rate the level of confidence in their responses. We also gathered information about the user’s sense of presence in the virtual environments using the SUS presence questionnaire.

Our results demonstrate that the visual medium used to present a product can influence how it is perceived. We also showed that an individual’s background as well as the use of a complementary item during product evaluation (when needed) can also influence product perception.

Our results showed that not all pleasure categories in Jordan’s classification were influenced equally by the visual medium. Although some studies highlight the importance of touch in the evaluation of a product, our study showed that features that do not require haptics can also be affected by the medium. For the case of umbrella stands, the psycho-pleasure category was the most affected, and the ideo-pleasure category was the only one not influenced by the medium.

Our results showed that the differences in the Semantic Scales between visual media were due to the absence of the sense of touch during the virtual experience. The absence of tactile feedback had a negative influence on the user's level of Confidence in the Response, which influenced the results. Therefore, we propose that VRPH can be an effective tool for product evaluation during product conceptualization, particularly in experiences that leverage the sense of touch.

Our study also demonstrated that an individual’s background does not influence the Confidence in the Response, but it can influence the assessment of certain product features, particularly in the psycho- and ideo-pleasure categories. Finally, the use of a complementary item (when needed) for product evaluation can also affect product perception, as additional information on product performance becomes available.

We acknowledge the limitations of our study. First, the particular characteristics of the participants in our sample make our findings only applicable to equivalent user groups. Second, some participants were familiar with one of the designs selected as stimulus. Finally, although our findings could potentially be extrapolated to similar products of the same typology, additional tests with a more diverse set of products are recommended to draw more conclusive results.

Our work contributes to the area of product design by empirically assessing the reliability of VR and VRPH as tools for product evaluation in the early stages of product development. In future studies, we plan to use physiological measures such as eye-tracking technologies to analyze user behavior more accurately and objectively during product evaluation activities.

Funding Data

This research work has been supported by the Spanish Ministry of Education and Vocational Training under a FPU fellowship (FPU19/03878).

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.

References

1. Coutts, E. R., Wodehouse, A., and Robertson, J., 2019, “A Comparison of Contemporary Prototyping Methods,” Proceedings of the Design Society: International Conference on Engineering Design, July 26, pp. 1313–1322.
2. Tiainen, T., Ellman, A., and Kaapu, T., 2014, “Virtual Prototypes Reveal More Development Ideas: Comparison Between Customers’ Evaluation of Virtual and Physical Prototypes: This Paper Argues That Virtual Prototypes Are Better Than Physical Prototypes for Consumers-Involved Product Development,” Virtual Phys. Prototyp., 9(3), pp. 169–180.
3. Ye, J., Badiyani, S., Raja, V., and Schlegel, T., 2007, “Applications of Virtual Reality in Product Design Evaluation,” Human–Computer Interaction. HCI Applications and Services, Beijing, China, July 22–27, pp. 1190–1199.
4. Lau, H. Y. K., Mak, K. L., and Lu, M. T. H., 2003, “A Virtual Design Platform for Interactive Product Design and Visualization,” J. Mater. Process. Technol., 139(1–3), pp. 402–407.
5. Evans, G., Hoover, M., and Winer, E., 2020, “Development of a 3D Conceptual Design Environment Using a Head Mounted Display Virtual Reality System,” J. Softw. Eng. Appl., 13(10), pp. 258–277.
6. Cooper, R. G., 2019, “The Drivers of Success in New-Product Development,” Ind. Mark. Manag., 76, pp. 36–47.
7. Bordegoni, M., 2011, “Product Virtualization: An Effective Method for the Evaluation of Concept Design of New Products,” Innovation in Product Design, M. Bordegoni and C. Rizzi, eds., Springer, London, pp. 117–141.
8. Hannah, R., Joshi, S., and Summers, J. D., 2012, “A User Study of Interpretability of Engineering Design Representations,” J. Eng. Des., 23(6), pp. 443–468.
9. Chu, C.-H., and Kao, E.-T., 2020, “A Comparative Study of Design Evaluation With Virtual Prototypes Versus a Physical Product,” Appl. Sci., 10(14), p. 4723.
10. Virzi, R. A., Sokolov, J. L., and Karis, D., 1996, “Usability Problem Identification Using Both Low- and High-Fidelity Prototypes,” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Common Ground—CHI ’96, Vancouver, Canada, Apr. 13–18, pp. 236–243.
11. Gibson, I., Gao, Z., and Campbell, I., 2004, “A Comparative Study of Virtual Prototyping and Physical Prototyping,” Int. J. Manuf. Technol. Manage., 6(6), pp. 503–522.
12. Berni, A., Maccioni, L., and Borgianni, Y., 2020, “An Eye-Tracking Supported Investigation Into the Role of Forms of Representation on Design Evaluations and Affordances of Original Product Features,” Proceedings of the Design Society: DESIGN Conference, Cavtat, Croatia, Oct. 26–29, pp. 1607–1616.
13. Cecil, J., and Kanchanapiboon, A., 2007, “Virtual Engineering Approaches in Product and Process Design,” Int. J. Adv. Manuf. Technol., 31(9–10), pp. 846–856.
14. Kent, L., Snider, C., Gopsill, J., and Hicks, B., 2021, “Mixed Reality in Design Prototyping: A Systematic Review,” Des. Stud., 77, p. 101046.
15. Ranaweera, A. T., Martin, B. A. S., and Jin, H. S., 2021, “What You Touch, Touches You: The Influence of Haptic Attributes on Consumer Product Impressions,” Psychol. Mark., 38(1), pp. 183–195.
16. Jerald, J., 2015, “What Is Virtual Reality?,” The VR Book, M. Tamer Özsu, ed., Association for Computing Machinery Books, New York, p. 9.
17. Wang, J., and Lu, W. F., 2012, “A Haptics-Based Virtual Simulation System for Product Design,” Int. J. Comput. Appl. Technol., 45(2–3), pp. 163–170.
18. Kreimeier, J., Hammer, S., Friedmann, D., Karg, P., Bühner, C., Bankel, L., Götzelmann, T., et al., 2019, “Evaluation of Different Types of Haptic Feedback Influencing the Task-Based Presence and Performance in Virtual Reality,” Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Rhodes, Greece, June 5–7, pp. 289–298.
19. Stamer, M., Michaels, J., and Tümler, J., 2020, “Investigating the Benefits of Haptic Feedback During In-Car Interactions in Virtual Reality,” HCI in Mobility, Transport, and Automotive Systems. Automated Driving and In-Vehicle Experience Design, Copenhagen, Denmark, July 19–24, pp. 404–416.
20. Lindeman, R. W., Sibert, J. L., and Hahn, J. K., 1999, “Hand-Held Windows: Towards Effective 2D Interaction in Immersive Virtual Environments,” Proceedings of the Virtual Reality Annual International Symposium, Houston, TX, Mar. 13–17.
21. Insko, B. E., 2001, Passive Haptics Significantly Enhances Virtual Environments, University of North Carolina, Chapel Hill, NC.
22. Artacho-Ramírez, M. A., Diego-Mas, J. A., and Alcaide-Marzal, J., 2008, “Influence of the Mode of Graphical Representation on the Perception of Product Aesthetic and Emotional Features: An Exploratory Study,” Int. J. Ind. Ergon., 38(11–12), pp. 942–952.
23. Palacios-Ibáñez, A., Ochando-Martí, F., Camba, J. D., and Contero, M., 2022, “The Influence of the Visualization Modality on Consumer Perception: A Case Study on Household Products,” 13th International Conference on Applied Human Factors and Ergonomics, New York, July 24–28, Vol. 49, pp. 160–166. http://doi.org/10.54941/ahfe1002050
24. Galán, J., Felip, F., García-García, C., and Contero, M., 2021, “The Influence of Haptics When Assessing Household Products Presented in Different Means: A Comparative Study in Real Setting, Flat Display, and Virtual Reality Environments With and Without Passive Haptics,” J. Comput. Des. Eng., 8(1), pp. 330–342.
25. Galán, J., García-García, C., Felip, F., and Contero, M., 2021, “Does a Presentation Media Influence the Evaluation of Consumer Products? A Comparative Study to Evaluate Virtual Reality, Virtual Reality With Passive Haptics and a Real Setting,” Int. J. Interact. Multimed. Artif. Intell., 6(6), p. 196.
26. Felip, F., Galán, J., García-García, C., and Mulet, E., 2019, “Influence of Presentation Means on Industrial Product Evaluations With Potential Users: A First Study by Comparing Tangible Virtual Reality and Presenting a Product in a Real Setting,” Virtual Real., 24(3), pp. 439–451.
27. Serrano, B., Botella, C., Baños, R. M., and Alcañiz, M., 2013, “Using Virtual Reality and Mood-Induction Procedures to Test Products With Consumers of Ceramic Tiles,” Comput. Hum. Behav., 29(3), pp. 648–653.
28. Steuer, J., 1992, “Defining Virtual Reality: Dimensions Determining Telepresence,” J. Commun., 42(4), pp. 73–93.
29. Zubair, S., Ansari, A., Shukla, V. K., Saxena, K., and Filomeno, B., 2022, Implementing Virtual Reality in Entertainment Industry, Vol. 291, Springer Singapore, Singapore.
30. Aziz, H., 2018, “Virtual Reality Programs Applications in Healthcare,” J. Health Med. Informat., 9(1), p. 305.
31. Camba, J. D., Soler, J. L., and Contero, M., 2017, “Immersive Visualization Technologies to Facilitate Multidisciplinary Design Education,” International Conference on Learning and Collaboration Technologies, Vol. 10295, P. Zaphiris and A. Ioannou, eds., Springer International Publishing, Cham, pp. 3–11.
32. Park, J., Lennon, S. J., and Stoel, L., 2005, “On-Line Product Presentation: Effects on Mood, Perceived Risk, and Purchase Intention,” Psychol. Mark., 22(9), pp. 695–719.
33. Kuliga, S. F., Thrash, T., Dalton, R. C., and Hölscher, C., 2015, “Virtual Reality As an Empirical Research Tool—Exploring User Experience in a Real Building and a Corresponding Virtual Model,” Comput. Environ. Urban Syst., 54, pp. 363–375.
34. Berni, A., and Borgianni, Y., 2020, “Applications of Virtual Reality in Engineering and Product Design: Why, What, How, When and Where,” Electronics, 9(7), pp. 1–29.
35. Kamil, M. J. M., and Abidin, S. Z., 2013, “Unconscious Human Behavior at Visceral Level of Emotional Design,” Procedia – Soc. Behav. Sci., 105, pp. 149–161.
36. Aftab, M., and Rusli, H. A., 2017, “Designing Visceral, Behavioural and Reflective Products,” Chin. J. Mech. Eng., 30(5), pp. 1058–1068.
37. Li, S. M., Chan, F. T. S., Tsang, Y. P., and Lam, H. Y., 2021, “New Product Idea Selection in the Fuzzy Front End of Innovation: A Fuzzy Best-Worst Method and Group Decision-Making Process,” Mathematics, 9(4), pp. 1–18.
38. Desmet, P., Overbeeke, K., and Tax, S., 2001, “Designing Products With Added Emotional Value: Development and Application of an Approach for Research Through Design,” Des. J., 4(1), pp. 32–47.
39. Holbrook, M. B., 1986, Emotion in the Consumption Experience: Toward a New Model of the Human Consumer, Vol. 6, Lexington Books, Lanham, MD.
40. Desmet, P., 2007, “Nine Sources of Product Emotion,” International Association of Societies of Design Research, Hung Hom, Hong Kong, Nov. 12–15.
41. Tiger, L., 1992, The Pursuit of Pleasure, Little, Brown & Company, Boston.
42. Desmet, P., 2002, Designing Emotions, Delft University of Technology, Delft, The Netherlands.
43. Norman, D. A., 2004, Emotional Design, Basic Books, New York.
44. Söderman, M., 2005, “Virtual Reality in Product Evaluations With Potential Customers: An Exploratory Study Comparing Virtual Reality With Conventional Product Representations,” J. Eng. Des., 16(3), pp. 311–328.
45. Reid, T. N., MacDonald, E. F., and Du, P., 2013, “Impact of Product Design Representation on Customer Judgment,” ASME J. Mech. Des., 135(9), p. 091008.
46. Agost, M. J., Vergara, M., and Bayarri, V., 2021, “The Use of New Presentation Technologies in Electronic Sales Environments and Their Influence on Product Perception,” International Conference on Human–Computer Interaction, Virtual Event, July 24–29, pp. 3–15.
47. Sylcott, B., Orsborn, S., and Cagan, J., 2016, “The Effect of Product Representation in Visual Conjoint Analysis,” ASME J. Mech. Des., 138(10), p. 101104.
48. Ray, S., and Choi, Y. M., 2017, “Employing Design Representations for User Feedback in the Product Design Lifecycle,” DS 87-4 Proceedings of the 21st International Conference on Engineering Design (ICED 17), Design Methods and Tools, Vol. 4, Vancouver, Canada, Aug. 21–25, pp. 563–572.
49. Palacios-Ibáñez, A., Navarro-Martínez, R., Blasco-Esteban, J., Contero, M., and Camba, J. D., 2022, “On the Application of Extended Reality Technologies for the Evaluation of Product Characteristics During the Initial Stages of the Product Development Process,” Comput. Ind., 144, p. 103780 (in press).
50. Forbes, T., Barnes, H., Kinnell, P., and Goh, M., 2018, “A Study Into the Influence of Visual Prototyping Methods and Immersive Technologies on the Perception of Abstract Product Properties,” DS 91: Proceedings of NordDesign 2018, Linköping, Sweden, Aug. 14–17.
51. Whitfield, A., and Wiltshire, J., 1982, “Design Training and Aesthetic Evaluation: An Intergroup Comparison,” J. Environ. Psychol., 2(2), pp. 109–117.
52. Solso, R. L., 2001, “Brain Activities in a Skilled Versus a Novice Artist: An fMRI Study,” Leonardo, 34(1), pp. 31–34.
53. Nodine, C. F., Locher, P. J., and Krupinski, E. A., 1993, “The Role of Formal Art Training on Perception and Aesthetic Judgment of Art Compositions,” Leonardo, 26(3), p. 219.
54. Solso, R. L., Vogt, S., and Magnussen, S., 2007, “Expertise in Pictorial Perception: Eye-Movement Patterns and Visual Memory in Artists and Laymen,” Leonardo, 36(1), pp. 91–100.
55. Hsu, S. H., Chuang, M. C., and Chang, C. C., 2000, “A Semantic Differential Study of Designers’ and Users’ Product Form Perception,” Int. J. Ind. Ergon., 25(4), pp. 375–391.
56. Grohmann, B., Spangenberg, E. R., and Sprott, D. E., 2007, “The Influence of Tactile Input on the Evaluation of Retail Product Offerings,” J. Retail., 83(2), pp. 237–245.
57. Peck, J., and Childers, T. L., 2003, “To Have and to Hold: The Influence of Haptic Information on Product Judgments,” J. Mark., 67(2), pp. 35–48.
58. Hagtvedt, H., and Adam Brasel, S., 2017, “Color Saturation Increases Perceived Product Size,” J. Consum. Res., 44(2), p. ucx039.
59. Slater, M., McCarthy, J., and Maringelli, F., 1998, “The Influence of Body Movement on Subjective Presence in Virtual Environments,” Hum. Factors J. Hum. Factors Ergon. Soc., 40(3), pp. 469–477.
60. Osgood, C. E., Suci, G. J., and Tannenbaum, P. H., 1957, The Measurement of Meaning, University of Illinois Press, Champaign, IL.
61. Vergara, M., Mondragón, S., Sancho-Bru, J. L., Company, P., and Agost, M. J., 2011, “Perception of Products by Progressive Multisensory Integration. A Study on Hammers,” Appl. Ergon., 42(5), pp. 652–664.
62. Jordan, P. W., 2002, Designing Pleasurable Products: An Introduction to the New Human Factors, CRC Press, London.
63. Higgins, J. J., Blair, R. C., and Tashtoush, S., 1990, “The Aligned Rank Transform Procedure,” Conference on Applied Statistics in Agriculture, Manhattan, KS, pp. 185–195.
64. Mansouri, H., Paige, R. L., and Surles, J. G., 2004, “Aligned Rank Transform Techniques for Analysis of Variance and Multiple Comparisons,” Commun. Stat. – Theory Methods, 33(9), pp. 2217–2232.
65. O’Keefe, R. M., and McEachern, T., 1998, “Web-Based Customer Decision Support Systems,” Commun. ACM, 41(3), pp. 71–78.
66. Ant Ozok, A., and Komlodi, A., 2009, “Better in 3D? An Empirical Investigation of User Satisfaction and Preferences Concerning Two-Dimensional and Three-Dimensional Product Representations in Business-to-Consumer e-Commerce,” Int. J. Hum. Comput. Interact., 25(4), pp. 243–281.
67. Marquis, J., and Deeb, R. S., 2018, “Roadmap to a Successful Product Development,” IEEE Eng. Manage. Rev., 46(4), pp. 51–58.
68. Achiche, S., Maier, A., Milanova, K., and Vadean, A., 2014, “Visual Product Evaluation: Using the Semantic Differential to Investigate the Influence of Basic Geometry on User Perception,” ASME International Mechanical Engineering Congress and Exposition, Montreal, Canada, Nov. 14–20, pp. 1–10.
69. Hauk, N., Hüffmeier, J., and Krumm, S., 2018, “Ready to Be a Silver Surfer? A Meta-Analysis on the Relationship Between Chronological Age and Technology Acceptance,” Comput. Human Behav., 84, pp. 304–319.