PyIBL is a Python implementation of a subset of Instance Based Learning Theory (IBLT). It is made and distributed by the Dynamic Decision Making Laboratory of Carnegie Mellon University for building computational cognitive models that support research into how people make decisions in dynamic environments.
Typically, PyIBL is used by creating an experimental framework in the Python programming language that uses one or more PyIBL Agent objects. The framework asks these agents to make decisions, and informs the agents of the results of those decisions. The framework may, for example, be strictly algorithmic, may interact with human subjects, or may be embedded in a web site.
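In outline, such a framework runs a choose/respond loop with each agent. The following minimal sketch assumes a recent version of PyIBL; names such as choose, respond, and default_utility follow the PyIBL documentation, but exact signatures vary between versions, and the payoff rule here is made up for illustration:

    import random
    from pyibl import Agent

    agent = Agent(default_utility=10)  # optimistic prepopulated utility drives early exploration
    for trial in range(60):
        choice = agent.choose(["safe", "risky"])  # agent picks the option with the highest blended value
        # The framework computes the outcome (here, an illustrative gamble)...
        payoff = 3 if choice == "safe" else (10 if random.random() < 0.3 else 0)
        agent.respond(payoff)  # ...and informs the agent of the result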
PyIBL is a library, or module, of code for use in Python programs; it is not a standalone application. Some knowledge of Python programming is essential for using it.
PyIBL is an ongoing project that began with a small subset of IBLT. As it evolves, more of IBLT is expected to be incorporated into it. For example, PyIBL does not currently support similarity, spreading activation, or deferred feedback, but these are planned for the near future. PyIBL is still sufficiently early in its development that future versions are likely to change some of today's APIs radically, and some effort may be required to upgrade projects from the current version of PyIBL to a later one.
Implemented by: Don Morrison
SpeedyIBL is a Python library implementing Instance Based Learning (IBL) models. It allows users to create single or multiple IBL agents with fast processing and response times, without compromising performance relative to the traditional implementation of IBL models. SpeedyIBL is made and distributed by the Dynamic Decision Making Laboratory of Carnegie Mellon University.
More details about installing and using SpeedyIBL to create IBL agents that can play a wide range of decision games, such as Binary Choice, Insider Attack, Minimap, Ms. Pac-Man, Fireman, and Cooperative Navigation, can be found in Download/Documentation.
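To give a flavor of the library, a minimal binary-choice loop might look like the sketch below. The choose/respond interface and the default_utility parameter follow the SpeedyIBL tutorial materials, but treat the exact names and signatures as assumptions and consult the documentation; the payoff rule is invented for illustration:

    import random
    from speedyibl import Agent

    agent = Agent(default_utility=4.4)  # prepopulated utility, as in the tutorial examples
    for episode in range(100):
        option = agent.choose(["A", "B"])  # select the option with the highest blended value
        reward = 1 if option == "A" and random.random() < 0.8 else 0  # illustrative payoff rule
        agent.respond(reward)  # report the observed reward back to the agent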
Implemented by: Nhat Phan and Ngoc Nguyen
Shiny-IBL is our most recent attempt to make an IBL model useful to researchers and students of behavioral science.
Shiny-IBL uses the Shiny R package to generate a web application written primarily in the R language. Shiny-IBL offers a complementary way to communicate the complexity of the dynamics emerging from the simple IBL model of binary choice.
The main insight from Shiny-IBL is that cognitive modelers should go beyond explanations of concepts that often require technical expertise and skills, and provide hands-on experiences that demonstrate and communicate the complex insights from their models without the need for additional skills. Such interactive tools can also be research tools in their own right. Shiny-IBL could be used to discover sets of inputs that produce model outputs shedding light on surprising aspects of human behavior, and to help researchers understand the reasons behind them.
Implemented by: Jeffrey Chrabaszcz and Erin Bugbee
The Instance-Based Learning Tool (IBLTool) is an effort by the Dynamic Decision Making Laboratory to formalize the theoretical approach to modeling. The goals are to have the Instance-Based Learning Theory be:
The tool is a graphical interface written in Visual Basic that uses sockets to communicate with various tasks.
System requirements:
Implemented by: Ripta Pasay and Varun Dutt
This is an R implementation of the IBL model for repeated choice tasks. It is made available for researchers, who can then customize the code for other tasks and choice problems. The model is based on the descriptions in Lejarraga, Dutt, & Gonzalez (2012) and Gonzalez & Dutt (2011).
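For readers unfamiliar with the model, its core computations are compact: each past outcome is stored as an instance whose activation decays as a power law of time, plus noise, and choices are made by comparing blended values, i.e., retrieval-probability-weighted averages of past outcomes. The sketch below (in Python for brevity) illustrates those equations as described in Gonzalez & Dutt (2011); it is not the distributed R code:

    import math, random

    d, sigma = 0.5, 0.25                # decay and noise parameters (common defaults)
    tau = sigma * math.sqrt(2)          # Boltzmann temperature derived from the noise

    def activation(timestamps, t):
        # Power-law decay over all past occurrences (all assumed earlier than t), plus logistic noise
        base = math.log(sum((t - ti) ** -d for ti in timestamps))
        eps = random.uniform(1e-10, 1 - 1e-10)
        return base + sigma * math.log((1 - eps) / eps)

    def blended_value(instances, t):
        # instances maps an observed outcome value to the list of times it was observed
        acts = {x: activation(ts, t) for x, ts in instances.items()}
        denom = sum(math.exp(a / tau) for a in acts.values())
        return sum(x * math.exp(a / tau) / denom for x, a in acts.items())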
Implemented by: Emmanouil Konstantinidis
This is a Matlab implementation of the IBL model for binary choice, developed and customized for running the problems in the Technion Prediction Tournament data set. It is made available for researchers, who can then customize the code for other tasks and choice problems. This is the model presented in Lejarraga, Dutt, & Gonzalez (2012) and Gonzalez & Dutt (2011).
Implemented by: Varun Dutt
This is a simple version of an Instance Based Learning (IBL) model implemented in Microsoft Excel. It is made and distributed by the Dynamic Decision Making Laboratory of Carnegie Mellon University to demonstrate the generality of the learning process across multiple tasks. This model is presented in Lejarraga, Dutt, & Gonzalez (2012).
Implemented by: Katja Mehlhorn
This is an interactive goal-seeking task in an environment called "gridworld". To do this task, a player needs to make sequential decisions about going up, down, left or right to explore the space. There are four colored targets that give the player points. The goal is to find the target with the highest value. In addition, the player is penalized 1 point for each decision made (i.e., a movement cost) and 5 points for walking into an obstacle.
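The scoring scheme can be summarized in a few lines. The following toy sketch is ours, for illustration only; the grid layout, target values, and the assumption that a player stays in place after hitting an obstacle are not taken from the task itself:

    MOVE_COST, OBSTACLE_PENALTY = -1, -5
    MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

    def step(pos, action, targets, obstacles):
        # One gridworld decision: pay the movement cost, bounce off obstacles, score targets
        dx, dy = MOVES[action]
        new_pos = (pos[0] + dx, pos[1] + dy)
        if new_pos in obstacles:
            return pos, MOVE_COST + OBSTACLE_PENALTY  # penalized; assumed to stay in place
        return new_pos, MOVE_COST + targets.get(new_pos, 0)  # e.g. targets = {(3, 4): 100}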
Implemented by: Ngoc Nguyen
The One-Step Gridworld is the same task as the goal-seeking Gridworld. The difficulty is that at each step the decision maker is informed only of their current position, the number of steps already taken, and the cost or benefit of the step just taken, as illustrated in the figure below.
Implemented by: Ngoc Nguyen
This is an interactive "search and rescue" task in which the player explores the rooms of a building to find and rescue victims. There are two types of victims in the building: Green victims are less severely injured and give the player 10 points if rescued; Yellow victims are more severely injured and give the player 30 points if rescued. Yellow victims die after 4 minutes, disappearing from the building; Green victims do not die during the mission.
Implemented by: Ngoc Nguyen
In this interactive cyber-defense task, you have been contracted to defend the computer network of a major company at one of its manufacturing plants. Attackers are trying to gain access to the Operational Server to steal the plans and disrupt production. Your goal, as a security expert, is to minimize your losses by blocking the attacker's intrusion. Each round, you will need to choose among 4 actions to prevent the attacker from progressing through the network.
Implemented by: Baptiste Prebot, Yinuo Du, and Tony Xi
In this activity, you will make simple choices between an "Option A" and an "Option B." Each option will give you either 0 or 500 points. There will be 50 of these decisions. There are three versions of this activity you can try: Demo 1, Demo 2, and Demo 3 each vary in the way the probabilities change over time. Could you guess how?
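As one hypothetical example of what "probabilities changing over time" can mean (this is not necessarily the scheme any of the demos uses), the chance that an option pays 500 could drift linearly across the 50 trials:

    import random

    def payoff(option, trial, n_trials=50):
        # Hypothetical drift: Option A starts likely to pay off, Option B ends likely to pay off
        p_a = 0.9 - 0.8 * trial / (n_trials - 1)
        p = p_a if option == "A" else 1 - p_a
        return 500 if random.random() < p else 0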
Implemented by: Orsi Kovacs
In this treasure hunting game, you will attempt to find treasure in one of two boxes. But it will not be easy! A defender (a computer bot) will attempt to keep you from finding the right box. The defender can defend a box, but only one at a time. To improve its chances, it may also send you alerts regarding a box's protection status. Sometimes these messages will be true, other times not. Will you succeed?
Implemented by: Orsi Kovacs
This is our version of the classic Rock, Paper, Scissors game. Over twenty rounds, you will make choices and receive points depending on whether you win, lose, or tie. Keep the payouts in mind when making your choices!
Implemented by: Frederic Moisan, Updated by: Don Morrison
The DCCS was inspired by generic dynamic stocks-and-flows tasks and is based on a simplified and adapted climate model. The DCCS interface represents a single stock, or accumulation, of CO2 in the form of an orange-colored liquid in a tank. Deforestation and fossil fuel CO2 emissions are represented by a pipe connected to the tank that increases the level of the CO2 stock; CO2 absorptions, represented by another pipe on the right of the tank, decrease the level of the CO2 stock.
The absorptions are outside the direct control of the participant. They depend on the change in concentration of the CO2 stock and on a CO2 transfer rate parameter derived from the models in the previous section.
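In essence, the participant is controlling a one-stock dynamic system. A simplified update rule, with an illustrative absorption term standing in for the calibrated transfer-rate model, might look like this:

    def update_co2_stock(stock, emissions, transfer_rate):
        # One time step: emissions flow in; absorption scales with the stock level
        absorption = transfer_rate * stock  # simplification of the calibrated absorption model
        return stock + emissions - absorption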
Implemented by: Varun Dutt
WPP is a resource allocation and scheduling task. It simulates a water distribution system with 23 tanks arranged in a tree structure and connected by pipes. The goal is to move the water through the purification process before the deadlines expire. Decision makers must manage a limited number of resources over time (only 5 of the 46 available pumps can be active at any given time) to accomplish this goal.
With minor modifications, WPP has been used extensively to study automaticity development, situation awareness, learning, and adaptation. In addition, we developed an ACT-R cognitive model that reproduces human learning in this task.
Implemented by: Cleotilde Gonzalez
DSF is a generic representation of the basic building blocks of every dynamic system: a single stock that represents accumulation; inflows, which increase the level of the stock; and outflows, which decrease the level of the stock. A user must maintain the stock at a particular level or, at least, within an acceptable range, by counteracting the effects of the environmental flows.
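The underlying accounting identity is simple; the difficulty lies in the dynamics. A discrete-time sketch of the stock update (the variable names are ours), with the user's flows counteracting the environment's:

    def next_stock(stock, env_inflow, env_outflow, user_inflow, user_outflow):
        # The user chooses user_inflow/user_outflow to keep the stock near its goal level
        return stock + (env_inflow + user_inflow) - (env_outflow + user_outflow)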
Implemented by: Varun Dutt and Cleotilde Gonzalez
We have created a learning environment that provides users with an interactive experience of supply-chain management. The supply chain consists of a single retailer who supplies beer to consumers (simulated as an external demand function), a single wholesaler who supplies beer to the retailer, a distributor who supplies the wholesaler, and a factory that brews the beer (drawing on an inexhaustible external supply) and supplies the distributor.
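Each echelon in the chain follows the same weekly bookkeeping. The sketch below is our simplification, not the tool's code: receive the arriving shipment, then ship as much of the current demand plus backlog as inventory allows:

    def echelon_step(inventory, backlog, arriving, demand):
        # Returns (new_inventory, new_backlog, amount_shipped_downstream)
        inventory += arriving
        owed = demand + backlog
        shipped = min(inventory, owed)
        return inventory - shipped, owed - shipped, shipped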
The beer game has been used extensively to study the way decision makers perform when confronted by dynamic complexity (Sterman, 2004). We use the beer game to study learning and adaptation in DDM. In addition, we have developed an ACT-R cognitive model that reproduces initial data collected on human learning (Martin, Gonzalez, & Lebiere, 2004).
Implemented by: Varun Dutt
MEDIC is an interactive tool involving the presentation of symptoms, generation of diagnoses, tests of diagnoses, treatments, and outcome feedback. The simulation begins with a patient complaining of symptoms. Each patient has a different initial health level, which continues to fluctuate downward until the patient has been treated or dies. Since each patient is randomly assigned one of many fictitious diseases, participants must test for the presence of symptoms, each of which has a different probability of being associated with one of the diseases. Each test returns a definitive result (absent or present) after a predetermined delay while the test runs. The participant then provides an assessment of the probability that the disease is present, after which they can either conduct more tests or administer a treatment. Feedback comprises the actual disease present, the disease the participant believed was present, and a score representing their accuracy throughout the task.
Implemented by: Jack Lim
Firechief is a resource allocation task in which the participant's goal is to minimize the damage caused by fire to the landscape. There is a limited number of appliances of two types (helicopters and fire trucks) used to extinguish fires.
The screenshot shows the fire having already consumed a certain percentage of the landscape (black areas of the screen) and continuing to spread (red dots on the screen). The score is the percentage of the landscape that has not yet been consumed by fire. The simulation takes place in real time and is highly dynamic: the environment changes autonomously as well as in response to the participant's actions. Resources are limited, as the appliances eventually run out of water and may need to be refilled, causing a delay in their use. The player needs to learn how and where to concentrate resources in order to save as much of the landscape as possible.