Date of Completion

7-20-2017

Embargo Period

7-20-2017

Keywords

Crowdsourcing, Crowdfunding, Prediction Market, Behavioral Economics

Major Advisor

Jan Stallaert

Associate Advisor

Sulin Ba

Associate Advisor

Xinxin Li

Field of Study

Business Administration

Degree

Doctor of Philosophy

Open Access

Campus Access

Abstract

This dissertation consists of three essays studying three kinds of online communities – online crowds – that help firms solve business problems.

The first essay investigates prediction markets, online exchanges for predicting events such as election results and new product sales. We ask whether prediction markets provide consistent and fallacy-free forecasts. We find that prices in the Presidential Election markets are occasionally prone to fallacies and that prices from these markets cannot always be interpreted as accurate, noiseless representations of the probabilities of the predicted events. Our results suggest that information gathered from prediction markets must be interpreted with caution.

The second essay examines the contribution behavior of online backers on reward-based crowdfunding (RBC) platforms. RBC platforms allow the public to pledge money to a project in exchange for a reward. This essay focuses on backers who pledge more money than is required to claim a reward. We find that backers tend to contribute less when a project has already received a large amount of funding, but tend to contribute more when the project is supported by a large number of backers. In addition, we show that herding may occur among backers who pledge money without claiming any reward. These results deepen our understanding of the contribution behavior of online backers on RBC platforms.

Crowdsourcing is a way to outsource a business task to an online community. The third essay investigates how feedback information on a crowdsourcing platform and the systematic biases of crowdsourcing workers can affect contest outcomes. Specifically, we examine the role of one systematic bias – the salience bias – in influencing crowdsourcing platform performance. Our results suggest that the salience bias influences the performance of contestants, including the contest winners. Furthermore, we show that the number of contestants moderates the impact of the salience bias through the parallel path effect and the competition effect. These results have important implications for crowdsourcing firms and platform designers.
