Work in progress

Human-algorithm interaction in inventory planning: algorithmic input versus output adjustments (joint work with Enno Siemsen)

Many operational decisions increasingly rely on the guidance of (AI-based) algorithms. However, managers and planners remain responsible for decision outcomes and hence often have the discretion to adjust algorithmic recommendations. Humans often possess intuition (tacit knowledge) or superior private information about a decision problem, but they are also imperfect decision makers and may reduce performance when they alter algorithmic recommendations. What is the most appropriate organizational structure for combining private human knowledge with algorithmic optimization? In this study, we consider a single-period inventory planning problem and compare allowing the human decision maker to adjust the forecast (algorithmic input) versus the order quantity (algorithmic output). We find that when subjects are asked to adjust algorithmic orders (rather than forecasts), they make better decisions and earn higher profits when the value of the human's private information is large. Specifically, subjects set lower quantities than the algorithm suggests (making, on average, downward adjustments), but they also make better use of their private information (i.e., the correlation between their own information and their adjustment is higher).

Supply Chain Planning Decisions and AI recommendations (joint work with Lijia Tan and Willem van Jaarsveld)

Sophisticated AI algorithms are often employed in practice to help supply chain planners make decisions (e.g., forecasting, ordering, production). How do humans use these algorithmic suggestions, and do decision makers learn from AI tools? We study a complex supply chain decision setting with uncertainties, delays, and interrelated decisions. We conduct incentivized lab experiments in which decision makers have access to a sophisticated AI tool, based on a neural network algorithm, that can significantly improve operational outcomes. Decision makers largely use the black-box algorithmic suggestions but modify them in most cases. Trust in AI recommendations depends on the type of decision (i.e., order, assembly, transport), the decision maker's task experience, and their general attitude towards AI tools.

Regulatory Focus and Emotions in Supply Chain Coordination (joint work with Santiago Kraiselburd, Konstantina Tzini and Priscilla Rodriguez)

Regulatory focus orientation affects people's emotions, thoughts, and actions. We conjecture that, beyond an individual's regulatory fit to a task, the fit between decision makers in a supply chain may also play a role in achieving coordination. We study a repeated capacity matching game under information asymmetry and information sharing (Hyndman, Kraiselburd, and Watson, 2013) across different profit scenarios and observe how regulatory fit, emotions, and decisions interact.

Service-level agreements and supplier capacity decisions: the impact of incentive framing on trust (joint work with Wendy van der Valk)

The goal of this project is to study the effect of incentive framing on the development of trust in supply chains and to shed light on the underlying mechanisms (e.g., violation of expectations, attributions of benevolence, and emotions). We focus on the framing of service-level agreements (bonus versus reward) and study their effect on trust development in the context of demand information sharing between a buyer and a supplier who sets capacity.