How to Bid Success in Crowdsourcing Contest?—Evidence from the Translation Tasks of Tripadvisor

Kunxiang DONG, Yan SUN, Zongxiao XIE, Jie ZHEN

Journal of Systems Science and Information ›› 2020, Vol. 8 ›› Issue (2) : 170-184.

DOI: 10.21078/JSSI-2020-170-15

Abstract

Material incentives are the main motivation for solvers to take part in crowdsourcing tasks, so raising the bidding success rate helps inspire solver participation and improves answer quality. This paper analyzes the effects of participation experience, task-fit capability, participation strategy and task attributes on solvers' bidding success, using data on solvers attending the series of translation tasks on Tripadvisor. The results show that: 1) Frequent participation enriches solvers' experience and promotes bidding success, while the number of past successful bids and the outcome of the last bid lower bidding success because of cognitive fixation; 2) The chance of bidding success increases when the solver has high task-fit capability; 3) The relationship between submission order and bidding success is an inverted U shape, with an optimal submission order ratio at the top of the curve; 4) Higher task difficulty lowers bidding success, while higher task density makes bidding success easier.

Cite this article

Download Citations
Kunxiang DONG, Yan SUN, Zongxiao XIE, Jie ZHEN. How to Bid Success in Crowdsourcing Contest?—Evidence from the Translation Tasks of Tripadvisor. Journal of Systems Science and Information, 2020, 8(2): 170-184. https://doi.org/10.21078/JSSI-2020-170-15

1 Introduction

Nowadays, the masses jointly take part in economic activities. New participation methods and collaborative innovation forms are altering the ways of production, marketing, distribution, services and innovation. The crowdsourcing model provides an equal and free communication platform for freelance workers as well as for enterprise innovation, and it creates opportunities for knowledge sharing between individuals and enterprises. Meanwhile, new business models are continuously stimulated[1]. Using crowdsourcing competitions instead of outsourcing to specific vendors can not only reduce the cost of R&D but also improve innovation efficiency[2, 3]. The crowdsourcing model gives the masses a platform to develop skills, pursue interests and generate additional revenue online. For enterprises, think tanks with collective intelligence are created, and new development directions are opened in operations management and R&D. Although crowdsourcing competitions have made the ways the masses work and think more flexible and personalized, with fragmented revenue, participants in current crowdsourcing competitions remain in an awkward position, with low solution skills, low winning rates and low yields. Many solvers therefore lose the enthusiasm to participate, forming a large group of "zombie Witkeys". It is thus necessary to increase solvers' probability of successful bids by clarifying the winning factors and improving solvers' performance as well as attendance, thereby helping the crowdsourcing market prosper.
In crowdsourcing competitions, improving the successful bid rate and obtaining the reward are the most important motives for solvers to participate[4]. Raising the probability of successful bids not only stimulates solvers to attend, but also improves answer quality[5] and reduces fraud and quality disputes[6]. Terwiesch and Xu[7] argued that competition intensity, task design and awards affect solvers' successful bids. Based on an empirical analysis of online labor crowdsourcing markets, Barnes et al.[8] found that solvers' capacity, participation experience and market conditions also affect solvers' bidding. Khasraghi and Aghaie[9] further explored the impact of participation experience among successful bidders, finding that the proximate cause of participation, the performance of the last successful bid and the number of successful bids all influence bidding success. Mahr et al.[10] studied the influence of different problem-solving methods and innovation levels on the probability of successful bids, finding that closed innovation and solution methods are not conducive to winning the bid. In addition to the factors mentioned above, Shao et al.[11] and Goodman and Paolacci[12] found that task attributes such as task duration, difficulty, density and description also influence successful bids. Furthermore, participation strategies[13, 14], task communication among solvers[15, 16], feedback from task publishers[17], establishment of a platform matching mechanism[4] and improvement of fraud prevention mechanisms[6] also affect solvers' successful bids.
The references mentioned above only studied the direct influence of each factor on solvers' bidding. Only Khasraghi and Aghaie[9] studied the moderating effect of the proximate cause of participation and the number of successful bids on the relationship between other factors and solvers' successful bids. However, task attributes such as task difficulty affect solvers' judgments about the effort required[18], and thus affect the relationship between solvers' engagement strategies and successful bids. Therefore, task attributes not only influence bidding directly but also indirectly through other factors.
To sum up, current research on the factors behind successful bids is relatively scattered. The literature lacks discussions of the moderating effect of task attributes and comprehensive studies of bidding success across the four dimensions of participation experience, task-fit capability, participation strategy and task attributes. Therefore, using data on solvers' translation tasks on Tripadvisor, this paper: 1) studies the effects of participation experience, task-fit capability, engagement strategies and task attributes on bidding success; 2) analyzes the moderating effects of task attributes, task-fit capability and the number of successful bids; and 3) finds an inverse U-shaped curve between the order of submission and the probability of successful bids, with the optimal order of submission at about 38% of the number of bids.

2 Research Theories and Assumptions

2.1 The Impact of Participation Experience on Solvers' Bidding Success

Participation experience includes the number of participations in crowdsourcing competitions, the number of successful bids and the performance of the last bid. Solvers with rich experience quickly understand the task background and the publisher's expectations, and thus respond positively[9]. Bockstedt et al.[19] believe that experienced participants have a higher success rate in subsequent tasks than those who have not participated in similar tasks. However, Bayus[16] found that solvers' level of innovation is constrained by previous successful experience, and the lowered creativity of their solutions leads to a decline in the success rate. According to the analysis above, the more solvers participate, the more experience they have, and the probability of successful bids increases with frequent participation in crowdsourcing competitions. However, due to the cognitive fixation effect, solvers' follow-up behavior is negatively affected by the way they won previous bids[10]. If solvers are unable to diversify their capacity, future submissions will fail for lack of innovation. Therefore, the following assumptions can be made:
H1a: The number of participation has a positive impact on solvers' bidding success.
H1b: The number of successful bids has a negative impact on solvers' bidding success.
In addition to the number of participations and the number of successful bids, whether or not the solver's answer won the bid last time also affects the probability of winning the current bid. If a solver won the last bid, this not only enriches the solver's participation experience[19], but also improves the solver's self-efficacy in crowdsourcing competitions. A higher sense of self-efficacy gives solvers more confidence that they can complete tasks, and significantly improves their performance[9]. However, according to Bayus[16], the last successful bid has a negative cognitive impact on subsequent bids. In summary, research conclusions on the impact of the last successful bid differ, but it is certain that the last successful bid leaves a significant mark on the success of the current bid. Therefore, the following assumption can be made:
H1c: The last successful bid has an impact on the success of the current bid.

2.2 The Impact of Task-Fit Capability on Solvers' Bidding Success

The task-fit capability reflects solvers' ability to quickly judge the probability of winning a crowdsourcing task based on their knowledge of their skills, available time and the publisher's needs[20]. Solvers with low task-fit capability need to search repeatedly for suitable tasks on crowdsourcing platforms, which wastes a lot of time and search cost. High search cost not only dispels solvers' enthusiasm for participating, but also leads to inaccurate task fitting during the search, which makes the quality of submitted work fall short of requirements so that the bid fails[4].
In contrast, solvers with higher task-fit capability can quickly identify tasks that match their own abilities, which improves publishers' recognition of their capabilities, reduces search costs, improves answering efficiency and raises the successful bid rate[20]. Therefore, the following assumption can be made:
H2: The task-fit capability has a positive impact on solvers' bidding success.

2.3 The Impact of Participation Strategies on Solvers' Bidding Success

The influence of the information obtained in the previous task on the current task is called the proximate cause of participation, including the experience obtained and the perceived risk of participating in the current competition[9]. Facing increasingly complex decision-making environments, people tend to synthesize the latest information when making judgments, and the latest information has a far stronger impact on the current situation than earlier information[21]. Khasraghi and Aghaie[9] argue that the proximate cause of participation can be measured by the time interval between the last task participation and the current one. Their results show that the proximate cause of participation has a negative impact on solvers: the longer the interval between the last task and the current one, the less impact the solver's last bidding experience has on the current bid. Therefore, the following assumption can be made:
H3a: The proximate cause of participation has a negative impact on solvers' bidding success.
According to Pennington and Hastie[22], the first message received by decision makers has the greatest impact on their final judgment; that is, decision makers are influenced by the order in which they receive information. Yang et al.[23] found, from statistics on Chinese solvers, that the last solver to submit is more likely to win the bid. However, the field experiment of Liu et al.[24] shows that if a high-quality solution is already among the submissions, experienced solvers will no longer participate; that is, high-quality solvers should submit their plan early to exclude other high-quality competitors[19]. It can be seen that submitting a plan too early may invite plagiarism and forfeit the initiative to observe the competition, while submitting too late leaves no time to revise the plan to meet the publisher's needs. Therefore, the following assumption can be made:
H3b: The submission order has an inverted U-shaped relationship with solvers' bidding success.

2.4 The Impact of Task Attributes on Solvers' Bidding Success

Task attributes of crowdsourcing competitions include task difficulty[25] and task density[11]. Task difficulty not only describes task requirements, solution forms and result quality, but also attracts solvers with different participation preferences[18]. Shao et al.[11] show that the lower the task difficulty, the easier it is to attract participants. Although more participants increase the success rate of the task itself, they also reduce each individual solver's chance of winning. In addition, competent solvers are more inclined to take on challenging tasks, which makes it harder for solvers in challenging tasks to win the bid[26]. As a result, solvers have little motivation to participate in tasks with uncertain returns and high complexity[27, 28]. Therefore, the following hypothesis can be made:
H4a: The task difficulty has a negative impact on participants' bidding success.
Crowdsourcing platforms release many tasks, of the same and of different types, simultaneously. The number of tasks in the same period is called the task density of that period. When task density is high, solvers can participate in one or more competing tasks, so the average number of solvers per task decreases and competition among solvers declines[11]. At the same time, limited by their ability and available time, solvers participating in multiple tasks put less effort into each task, which reduces their probability of winning. Other solvers choose appropriate tasks according to their abilities, ensuring the quality of every solution and thereby the success rate of each task[17]. Compared with low task density, high task density increases the probability of winning[12]. Therefore, the following hypothesis can be made:
H4b: The task density has a positive impact on participants' bidding success.

2.5 Moderating Effects

According to existing research, the number of successful bids can moderate the relationship between solvers' task-fit capability and bidding success both positively and negatively. On the one hand, Huang et al.[28] argue that in crowdsourcing competitions solvers improve their task-fit capability through winning bids: winning not only helps solvers understand publishers' preferences, but also improves their estimates of submission quality and implementation cost, which is conducive to further success. On the other hand, Bayus[16] argues that unless solvers' cognitive fixation is eliminated, the next answer is easily constrained by previous successful bids. In addition, the data collected here come from Tripadvisor's review-translation tasks. One goal of enterprises running ongoing crowdsourcing tasks is to expand brand influence and make more consumers aware of their services and products[3]. To widen publicity, enterprise publishers encourage broad participation rather than seeking only the best solution quality, and therefore tend to spread winning bids across different solvers. Combining these factors, the following hypothesis can be made:
H5: Under the same task-fit capability, solvers with more past successful bids are less likely to win the bid.
Task-fit capability is an important index for solvers when choosing which crowdsourcing competitions to enter, and it comes from solvers' skills and experience. When the skills required by a publisher's task are specialized, fewer solvers have matching abilities. But when several ability-matched solvers compete, the success rate is highest at the optimal submission order for a given task difficulty and competition intensity. Therefore, the following hypothesis can be made:
H6: Under the optimal submission order, the higher the task-fit capability, the easier it is for the solver to win the bid.
Bockstedt et al.[19] believed that within the same task, the sooner solvers submit, the better they can grasp the state of the competition and win the bid. However, when solvers participate in different crowdsourcing tasks at the same time, restrictions of their own time and ability mean that low-competence solvers can only enter low-skill tasks, while high-competence solvers tend toward more difficult ones. When the skills a task requires are low, it attracts more solvers[11]; under this circumstance, solvers who submit their proposal early are more likely to win[19]. When the task is more difficult, it takes solvers more time and effort to complete the solution, so they submit later than in simple tasks. Therefore, the following hypothesis can be made:
H7a: The more difficult the task and the later the submission order, the easier it is for solvers to win the bid.
Task density describes the ratio of the number of publishers to the number of solvers in crowdsourcing competitions at the same time, and measures the intensity of competition among solvers. The higher the task density, the more tasks solvers can participate in; the weaker the competition for each task, the less effort solvers put into each task. Meanwhile, the number of solvers attracted by each task declines[11]. To obtain satisfactory answers, publishers then extend the competition time[18]. According to Bockstedt et al.[19], when the number of solvers in a competition is small, the advantage of early submission no longer exists. In that case, compared with low-quality solutions submitted earlier, high-quality solutions submitted later are more likely to win the bid[26]. Therefore, the following hypothesis can be made:
H7b: The higher the task density and the later the submission order, the easier it is for solvers to win the bid.
To sum up, we construct a research model (shown in Figure 1) explaining how participation experience, task-fit capability, participation strategies and task attributes affect solvers' bidding success.
Figure 1 Research framework


3 Research Design and Method

3.1 Data Sources

Tripadvisor is one of the largest and most popular tourist communities in the world, with about 260 million monthly visits, more than 10 million members and 150 million real comments from travelers. To help Chinese tourists understand foreign tourist attractions, hotels and other facilities and services, the Tripadvisor China website translates all English comments on the site into Chinese. From June 20, 2013 to July 19, 2014, Tripadvisor issued 1282 translation tasks for hotel reviews in various countries and regions on China Task, run as reward-offering contests with an individual bid winner. The translation tasks were English-Chinese and French-Chinese, with awards of 500 yuan and 1000 yuan respectively. Since this study does not consider the effect of reward amount or task type on bidding success, we retained the 500-yuan English-Chinese translation tasks among the 1282 samples, yielding 1279 valid tasks. These tasks attracted 3367 solvers; after excluding those who participated in crowdsourcing competitions only once or never, an effective sample of 5896 participation records was obtained for hypothesis testing.

3.2 Variable Description and Measurement

Following the measurement of related variables in Shao et al.[11], Gefen et al.[18] and Bockstedt et al.[19], the variables are measured as follows:
Dependent variable: Bidding success is a dummy variable, marked 1 for a successful bid and 0 otherwise.
Independent variables: 1) The number of participations is the number of tasks a solver has entered since registering on the platform[19]. 2) The number of successful bids is the number of tasks the solver has won from registration to date[18]. 3) The last bid outcome is marked 1 if the last bid succeeded and 0 otherwise[9]. 4) Following Bockstedt et al.[19] and Guo et al.[20], task-fit capability is measured from the number of successful bids and the number of participating bids. 5) Following Khasraghi and Aghaie[9], the interval between the last participation and the present participation measures the proximate cause of participation. 6) The submission order ratio equals the submission serial number divided by the total number of submissions. 7) Task difficulty is the ratio of the number of participants to the number of views[11]; since the raw ratio is inconvenient for calculation, it is multiplied by 100, giving task difficulty coefficient = (100 × submitted works) / task followers. 8) For task density, Shao et al.[11] used the 10 days before and after task launch as a competitive unit; to obtain more task-density samples, we use the total number of tasks released in the 3 days before and after a task's release to represent its competitive density.
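The derived measures in items 4), 6), 7) and 8) can be written out directly. Below is a minimal sketch in plain Python with illustrative toy numbers; the function names are ours, not from the paper, and the task-fit measure is sketched here as the win-to-participation ratio.

```python
def task_fit_capability(n_wins, n_participations):
    """Task-fit capability sketch: successful bids per participation."""
    return n_wins / n_participations if n_participations else 0.0

def submission_order_ratio(serial_number, total_submissions):
    """Relative position of a submission: serial number / total submissions."""
    return serial_number / total_submissions

def task_difficulty(n_submissions, n_followers):
    """Difficulty coefficient = 100 * submitted works / task followers."""
    return 100 * n_submissions / n_followers

def task_density(release_day, all_release_days, window=3):
    """Number of tasks released within +/- `window` days of this task."""
    return sum(1 for d in all_release_days if abs(d - release_day) <= window)

# Toy example values
fit = task_fit_capability(n_wins=4, n_participations=20)               # 0.2
ratio = submission_order_ratio(serial_number=5, total_submissions=10)  # 0.5
difficulty = task_difficulty(n_submissions=13, n_followers=100)        # 13.0
density = task_density(release_day=10, all_release_days=[7, 9, 10, 12, 15])  # 4
```

The 3-day window in `task_density` mirrors the paper's choice of counting tasks released within three days before and after the focal task.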
Control variables include not only the reward amount and task type, which were controlled during sample selection, but also solvers' absolute submission order. Descriptive statistics and correlations are shown in Table 1.
Table 1 Descriptive statistics and correlation coefficient matrix
Variables 1 2 3 4 5 6 7 8 9 10
1.Successful bid 1
2.Participated times 0.26 1
3.Bid times 0.39 0.84 1
4.Submission order -0.14 -0.16 -0.15 1
5.Winning rate 0.57 0.42 0.66 -0.15 1
6.Last bid 0.17 -0.08 0.04 -0.07 0.40 1
7.Proximate cause 0.21 0.59 0.53 -0.12 0.33 -0.05 1
8.Submission order ratio 0.02 -0.10 -0.09 0.49 -0.08 -0.03 -0.02 1
9.Task difficulty -0.18 -0.23 -0.22 0.52 -0.18 -0.19 -0.26 0.12 1
10.Task density 0.08 0.07 0.05 -0.10 0.07 0.01 0.09 0.08 -0.23 1
Average 0.20 17.44 4.35 5.29 0.14 0.12 159.6 0.51 0.13 20.51
Standard deviation 0.40 23.32 8.51 5.43 0.20 0.32 1190.0 0.29 0.09 9.93
t(p < 0.1) 17.33 23.81 -13.08 -15.78 5.64 40.03 16.08 10.73 -1.79
Note: Values in the table are Pearson correlation coefficients; coefficients with absolute value > 0.019 are significant at the p<0.01 level, via two-sided test.
In Table 1, the number of participations and the number of successful bids are highly correlated (r=0.84). To check for collinearity, we conducted t-tests on each variable and found significant differences among the variables, so collinearity is not a serious concern.
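The authors screen for collinearity with t-tests. A common complementary check, not used in the paper, is the variance inflation factor VIF_j = 1 / (1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors. A sketch in numpy on synthetic data:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (rows = observations).
    VIF_j = 1 / (1 - R^2_j), with R^2_j from an OLS regression of column j
    on the remaining columns plus an intercept."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = rng.normal(size=500)              # independent of x1: VIF near 1
x3 = x1 + 0.1 * rng.normal(size=500)   # nearly collinear with x1: large VIF
v = vif(np.column_stack([x1, x2, x3]))
print(v)
```

A rule of thumb treats VIF values above about 10 as a collinearity warning; here the correlated pair would clearly exceed that threshold.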

4 Hypotheses Testing and Results Analysis

Since the dependent variable is a dummy variable, a hierarchical logistic regression model is used to test the hypotheses. To eliminate dimensional differences between variables and the influence of their own variation and numerical fluctuation, the variables are first mean-centered. Control variables, independent variables and interaction terms are then entered successively, and logistic regressions on bidding success are estimated. The results are shown in Table 2:
Table 2 Results of regression analysis
Variables β Model 1 Model 2 Model 3 Model 4 Model 5 Model 6
Constant term β0 -1.836*** -1.683** -1.826*** -1.827*** -1.842*** -1.680
Submission order -0.080** -0.080 -0.084** -0.111** -0.077** -0.119**
Experiences Participated time β1 0.016*** -0.002 0.015*** 0.016*** 0.015*** 0.002
Last bid β2 -0.876*** -0.831*** -0.880*** -0.873*** -0.877*** -0.835**
Bid times β3 -0.066** 0.026 -0.063** -0.065** -0.066** -0.022
Task-fit Capacity β4 9.217* 8.259* 9.219** 9.184** 9.219** 8.293**
Strategies Proximate causes β5 1.643 1.406 2.079 1.566 1.842 2.011
Submission order ratio β6 2.629** 2.572*** 2.183** 2.742*** 2.699** 2.372***
Submission order ratio² β6² -1.684** -1.621* -1.553*** -1.438** -1.803*** -1.354**
Task Attributes Task difficulty β7 -1.331* -1.367* -1.382* -0.190 -1.452* -0.018
Task density β8 0.008* 0.008** 0.007* 0.008* 0.007* 0.009**
Winning times* Fitting capacity β9 -0.152** -0.138***
Task-fit capability* Submission order ratio β10 2.973*** 3.007**
Task difficulty* Submission order ratio β11 5.789** 7.317***
Task density* Submission order ratio β12 0.022** 0.032**
Log-likelihood -1996.4 -1995.3 -1988.7 -1993.1 -1995.2 -1977.0
χ² (df), p < 0.001 27.441 (8) 26.118 (8) 30.683 (8) 27.644 (8) 25.783 (8) 24.736 (8)
Adjusted R2 0.444 0.444 0.447 0.445 0.445 0.451
N 5896 5896 5896 5896 5896 5896
Note: *p<0.1, **p<0.05, ***p<0.01, via two-sided test.
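The estimation steps described above (mean-centering the predictors, forming interaction terms, and fitting a logistic regression on the bidding-success dummy) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the paper's dataset or exact procedure: the variable names are placeholders, and the fit uses plain gradient ascent rather than whatever software the authors used, which the paper does not report.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
fit_cap = rng.normal(size=n)        # stand-in for task-fit capability
order_ratio = rng.uniform(0, 1, n)  # stand-in for submission order ratio

# Step 1: mean-center the predictors to remove location/scale artifacts
fit_c = fit_cap - fit_cap.mean()
ord_c = order_ratio - order_ratio.mean()

# Step 2: design matrix with intercept, main effects, and an interaction term
X = np.column_stack([np.ones(n), fit_c, ord_c, fit_c * ord_c])

# Simulate outcomes from known coefficients so the fit can be sanity-checked
true_beta = np.array([-1.0, 0.8, 0.5, 0.3])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

# Step 3: maximum-likelihood logistic fit by gradient ascent
beta = np.zeros(4)
for _ in range(3000):
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (y - p_hat) / n  # gradient of mean log-likelihood

print(beta)  # estimates should lie near true_beta
```

Adding a squared (centered) submission order ratio column in the same way, and finding a significant negative coefficient on it, is what would produce the inverted U reported in Table 2.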

4.1 Analysis of the Effect of Solvers' Participation Experience on Bidding Success

The regression results in Table 2 show that solvers' participation times have a significantly positive impact on bidding success (β1=0.016, p<0.01). The more often solvers participate, the more experience they gain, and the more accurately they judge the task background and required skills, so the probability of winning the next bid is higher. H1a is thus supported. Bayus[16] argued that, due to cognitive fixation, a winner's chance of winning the next bid decreases. The regression results likewise show that the last bid outcome (β2=-0.876, p<0.01) and the number of successful bids (β3=-0.066, p<0.05) have negative impacts on the current bid, supporting H1b and H1c. In addition, as part of its brand publicity, Tripadvisor tried to disperse the successful bidders during the bidding process so that more people would hear of it; as a result, those who had won more often had less chance of winning again. This practice not only motivated the continuous participation of other solvers, but also realized wide publicity for the enterprise's services.

4.2 Analysis of the Effect of Solvers' Task-Fit Capacity on Bidding Success

The regression results of Model 1 show that solvers' task-fit capacity has a significant impact on bidding success (β4=9.217, p<0.1). The higher the solver's task-fit capacity, the easier it is to pick suitable tasks among many others and to submit the solution that best meets the publisher's needs, increasing the probability of winning the bid. H2 is supported.

4.3 Analysis of the Effect of Participation Strategies on Solvers' Bidding Success

The results in Table 2 show that the proximate cause of participation has no significant influence on bidding success (β5, p>0.1), so H3a is not supported. The coefficients of the submission order ratio and its square have opposite signs: β6=2.629 (p<0.05) and β6²=-1.684 (p<0.05). This indicates an inverted U-shaped relationship between the submission order and the bidding success rate, supporting H3b. To win the bid, solvers must submit high-quality solutions, which takes time, so high-quality solutions should not be submitted too early. However, once a good enough solution appears, the publisher may accept it, and subsequent high-quality solutions lose their chance[16]. That is, to win the bid, solvers need to invest enough effort to produce a high-quality plan, yet still submit it as early as that quality allows.
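For an inverted-U term b1·x + b2·x² (with b2 < 0) in the linear predictor, the log-odds peak at x* = -b1 / (2·b2). As a back-of-envelope illustration only: plugging in the raw Model 1 coefficients gives a peak near 0.78, which does not reproduce the roughly 38% optimum reported in the introduction, since the paper's figure is computed on centered data with the full set of controls.

```python
def logit_peak(b1, b2):
    """Peak of b1*x + b2*x**2 in a logit linear predictor; requires b2 < 0."""
    assert b2 < 0, "an inverted U needs a negative quadratic coefficient"
    return -b1 / (2.0 * b2)

# Model 1 coefficients for the submission order ratio and its square (Table 2)
peak = logit_peak(2.629, -1.684)
print(round(peak, 3))  # 0.781
```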

4.4 Analysis of the Effect of Task Attributes on Solvers' Bidding Success

The regression results show that the coefficient of task difficulty on bidding success is β7=-1.331 (p<0.1), a significantly negative impact: the greater the difficulty, the lower the probability of a successful bid. H4a is supported. Task density has a significantly positive effect on bidding success (β8=0.008, p<0.1), indicating that the higher the task density, the higher the successful bid rate; H4b is supported. High task density reduces competition among solvers and so increases the probability of successful bids.

4.5 Analysis of Moderating Effects

Model 2 describes the moderating effect of the number of solvers' successful bids. The results show that the number of successful bids negatively moderates the relationship between task-fit capability and bidding success (β9=-0.152, p<0.05), so H5 is supported. From Figure 2(a), the probability of winning is higher for bidders with fewer past successful bids, and when solvers with a low past success count improve their task-fit capacity, their winning probability rises. Model 3 describes the moderating effect of task-fit capability. The results show that task-fit capability positively moderates the relationship between submission order and winning probability (β10=2.973, p<0.01), while the inverted U-shaped relationship between submission order and bidding success remains (β6=2.183, p<0.05; β6²=-1.553, p<0.01). H6 is supported: for the same submission order, the probability of winning increases as solvers' task-fit capability improves.
Figure 2 Moderating effects of the number of successful bids and the task-fit capacity


Model 4 shows that the coefficient of the interaction between task difficulty and submission order is β11=5.789 (p<0.05), indicating that task difficulty positively moderates the relationship between submission order and bidding success. The coefficients of the submission order ratio and its square are β6=2.742 (p<0.01) and β6²=-1.438 (p<0.05), so the inverse U-shaped relationship between submission order and winning probability still holds. Since the interaction coefficient is greater than zero, task difficulty strengthens the inverted U: the higher the task difficulty, the further the peak of the inverted U moves to the right, meaning the optimal order of submission becomes later (see Figure 3(a)). H7a is supported. Similarly, the interaction between task density and submission order (β12=0.022, p<0.05) shows that task density positively moderates the relationship between submission order and bidding success.
Figure 3 Moderating effects of the task difficulty and task density


As the task becomes more difficult, it takes more time to complete, and the more effort participants make, the more likely they are to win. As shown in Figure 3(a), the bidding success curve rises as the task becomes more difficult, and its peak gradually moves to the right. Figure 3(b) shows that as task density increases, the probability of winning increases, though the effect on the submission order is not obvious. Comparing panels (a) and (b) of Figure 3, the moderating effect of task difficulty is more pronounced than that of task density. The regression results for all models are summarized in Figure 4.
Figure 4 Results of regression analysis
Note: "+" indicates a positive effect, "-" a negative effect, and n.s. not significant; *p < 0.1, **p < 0.05, ***p < 0.01, via two-sided test.


5 Conclusions and Implications

Based on data from Tripadvisor translation tasks, this paper analyzes the effects of solvers' participation experience, task-fit capability, participation strategy, and task attributes on their successful bids. The findings are: 1) Participation experience has both positive and negative effects on bidding success. The frequency of participation increases the probability of a successful bid because it strengthens solvers' experience, whereas the number of past successful bids and the most recent successful bid restrict solvers' innovation through cognitive fixation and thus reduce the probability of winning. 2) Task-fit capability has a positive effect on successful bids: the higher the task-fit capability, the higher the probability of winning. 3) There is an inverted U-shaped relationship between the order of submission and successful bids, indicating an optimal submission order, consistent with Bockstedt, et al.[19]. However, the recency of participation has no significant effect on successful bids. 4) Task attributes not only influence bidding success but also moderate the effect of solvers' submission order: task density has a positive effect and task difficulty a negative effect on successful bids, and both positively moderate the relationship between submission order and winning.
5) Although the number of successful bids enriches solvers' experience, it negatively moderates the relationship between task-fit capability and successful bids, owing to cognitive fixation and the number of publishers. In contrast, task-fit capability positively moderates the relationship between the order of submission and successful bids; that is, for the same submission order, the stronger a solver's task-fit capability, the higher the probability of winning.
From these results, the following managerial implications can be drawn. 1) Solvers should participate in crowdsourcing tasks, especially tasks of different types, to enrich their participation experience and increase the probability of winning. Although the number of successful bids and the most recent successful bid have a negative impact on a solver's chances, solvers can mitigate this cognitive fixation through communication and publishers' feedback. 2) The ability gained through participation, task-fit capability, increases the possibility of winning; solvers should therefore not only summarize their experience and improve their capabilities, but also quickly identify the skills a crowdsourcing task requires so as to raise task fit. 3) When submitting solutions, solvers should pay attention to submission strategy, that is, choose the submission order according to the number of solvers. According to Figure 2b, Figure 3a, and Figure 3b, the best position is around 38% of the total number of submissions. 4) Solvers should fully understand the difficulty of a task and the density of similar tasks on the platform before participating. When a task is difficult, solvers should carefully weigh their capabilities and available time to ensure the quality of the submitted work; otherwise they will fail because the submission does not meet the publisher's requirements. When task density is relatively high, the competition for each task decreases, so solvers should participate more actively in crowdsourcing contests to increase the possibility of winning.
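The "submit around 38% of the way through" heuristic from implication 3) can be turned into a simple rule of thumb. The 0.38 fraction comes from the paper's figures; the function name and the solver's estimate of the total number of submissions are illustrative assumptions:

```python
# Hypothetical helper: convert the paper's ~38% optimal-position heuristic
# into a concrete submission slot, given a solver's own estimate of how
# many submissions the task will attract.

def target_slot(expected_submissions: int, fraction: float = 0.38) -> int:
    """Return the submission position closest to the given fraction,
    clamped to at least the first slot."""
    return max(1, round(fraction * expected_submissions))

print(target_slot(50))  # -> 19: aim to be roughly the 19th of ~50 submissions
```

In practice a solver cannot observe the final submission count in advance, so the estimate would have to come from the history of similar tasks on the platform.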

References

[1] Howe J. The rise of crowdsourcing. Wired Magazine, 2006, 14(6): 1-4.
[2] Afuah A, Tucci C L. Crowdsourcing as a solution to distant search. Academy of Management Review, 2012, 37(3): 355-375.
[3] Blohm I, Leimeister J M, Krcmar H. Crowdsourcing: How to benefit from (too) many great ideas. MIS Quarterly Executive, 2013, 12(4): 199-211.
[4] Geiger D, Schader M. Personalized task recommendation in crowdsourcing information systems - Current state of the art. Decision Support Systems, 2014, 65: 3-16.
[5] Lüttgens D, Pollok P, Antons D, et al. Wisdom of the crowd and capabilities of a few: Internal success factors of crowdsourcing for innovation. Journal of Business Economics, 2014, 84(3): 339-374.
[6] Eickhoff C, de Vries A P. Increasing cheat robustness of crowdsourcing tasks. Information Retrieval, 2013, 16(2): 121-137.
[7] Terwiesch C, Xu Y. Innovation contests, open innovation, and multiagent problem solving. Management Science, 2008, 54(9): 1529-1543.
[8] Barnes S A, Green A, de Hoyos M. Crowdsourcing and work: Individual factors and circumstances influencing employability. New Technology, Work and Employment, 2015, 30(1): 16-31.
[9] Khasraghi H J, Aghaie A. Crowdsourcing contests: Understanding the effect of competitors' participation history on their performance. Behaviour & Information Technology, 2014, 33(12): 1383-1395.
[10] Mahr D, Rindfleisch A, Slotegraaf R J. Enhancing crowdsourcing success: The role of creative and deliberate problem-solving styles. Customer Needs and Solutions, 2015, 2(3): 209-221.
[11] Shao B, Shi L, Xu B, et al. Factors affecting participation of solvers in crowdsourcing: An empirical study from China. Electronic Markets, 2012, 22(2): 73-82.
[12] Goodman J K, Paolacci G. Crowdsourcing consumer research. Journal of Consumer Research, 2017, 44(1): 196-210.
[13] Yang Y, Chen P Y, Banker R. Impact of past performance and strategic bidding on winner determination of open innovation contest. Workshop on Information Systems and Economics (WISE 2010), 2010: 11-20.
[14] Kohler T, Chesbrough H. From collaborative community to competitive market: The quest to build a crowdsourcing platform for social innovation. R&D Management, 2019, 49(3): 356-368.
[15] Sun Y, Fang Y, Lim K H. Understanding sustained participation in transactional virtual communities. Decision Support Systems, 2012, 53(1): 12-22.
[16] Bayus B L. Crowdsourcing new product ideas over time: An analysis of the Dell IdeaStorm community. Management Science, 2013, 59(1): 226-244.
[17] Yang Y, Chen P Y, Pavlou P. Open innovation: An empirical study of online contests. ICIS 2009 Proceedings, 2009: 13-20.
[18] Gefen D, Gefen G, Carmel E. How project description length and expected duration affect bidding and project success in crowdsourcing software development. Journal of Systems and Software, 2016, 116: 75-84.
[19] Bockstedt J, Mishra A, Druehl C. Do participation strategy and experience impact the likelihood of winning in unblind innovation contests? Social Science Electronic Publishing, 2011, 89(3): 370-371.
[20] Guo W, Straub D, Zhang P. Understanding vendor selection in crowdsourcing marketplace: A matter of vendor-task fit and swift trust, 2014: 1-17. https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1687&context=amcis2014.
[21] Hogarth R M, Einhorn H J. Order effects in belief updating: The belief-adjustment model. Cognitive Psychology, 1992, 24(1): 1-55.
[22] Pennington N, Hastie R. Evidence evaluation in complex decision making. Journal of Personality and Social Psychology, 1986, 51(2): 242-258.
[23] Yang J, Adamic L A, Ackerman M S. Crowdsourcing and knowledge sharing: Strategic user behavior on taskcn. Proceedings of the 9th ACM Conference on Electronic Commerce, ACM, 2008: 246-255.
[24] Liu T X, Yang J, Adamic L A, et al. Crowdsourcing with all-pay auctions: A field experiment on taskcn. Management Science, 2014, 60(8): 2020-2037.
[25] Morschheuser B, Hamari J, Maedche A. Cooperation or competition - When do people contribute more? A field experiment on gamification of crowdsourcing. International Journal of Human-Computer Studies, 2019, 127: 7-24.
[26] Brabham D C. Motivations for participation in a crowdsourcing application to improve public engagement in transit planning. Journal of Applied Communication Research, 2012, 40(3): 307-328.
[27] Zheng H, Li D, Hou W. Task design, motivation, and participation in crowdsourcing contests. International Journal of Electronic Commerce, 2011, 15(4): 57-88.
[28] Huang Y, Vir Singh P, Srinivasan K. Crowdsourcing new product ideas under consumer learning. Management Science, 2014, 60(9): 2138-2159.

Acknowledgements

The authors gratefully acknowledge the editor and two anonymous referees for their insightful comments and helpful suggestions that led to a marked improvement of the article.

Funding

National Social Science Foundation of China (17CGL019)