Research Projects
Meet our researchers and see the cutting-edge artificial intelligence applications being developed at Stevens Institute of Technology.
The projects below showcase the breadth and depth of faculty expertise within our cross-disciplinary research group, the Stevens Institute for Artificial Intelligence (SIAI). They are a sampling of the AI-related research being conducted at Stevens, with potential business and technology applications.
AI Research Demonstrations
Here are brief descriptions of some of the demonstrations and exhibits of artificial intelligence applications being developed at Stevens, listed by research cluster within the SIAI:
Art and Music
Kelland Thomas & Donya Quick
Researchers
Composition by conversation (CbC) is a scenario where a human and machine collaboratively compose music primarily through the use of natural language commands. This task involves modeling natural language for musical concepts, developing computational models of music that facilitate integration with natural language processing algorithms, querying a score for musical features based on partial specifications of concepts, and generating new music based on user descriptions. CbC also involves language synthesis from low-level conceptual representations, so that the computer’s response when confirming its understanding of a command is not simply a parroting-back of the user’s input. In addition to being a tool for computer-assisted music creation, CbC also has utility for music education, allowing questions to be asked about the score and teaching its users a correct musical vocabulary. This demo will illustrate the current state of the CbC user interface for creating and editing short musical scores with a chat-style interface. CbC is part of the MUSICA project under the DARPA Communicating with Computers program.
Kelland Thomas & Donya Quick
Researchers
Trading Fours is an interaction between two jazz musicians who exchange short solos. Communication of musical ideas takes place much like a melodic conversation, and musicians will echo each other's ideas in new contexts while simultaneously incorporating novel material. This demo will illustrate these kinds of interactions in the context of a human musician and a computer. The computer can also provide accompaniment in the form of additional artificial musicians creating harmony and bass parts, allowing a more band-like experience for the musician. Problems for the computer in this domain involve pattern detection (locating musical motifs), pattern-based generation, and making stylistically correct decisions - all while under real-time constraints. Trading Fours is part of the MUSICA project under the DARPA Communicating with Computers program.
Cognitive Computing and Communications
K.P. Subbalakshmi
Researcher, School of Engineering & Science
Presenters: Ning Wang and Cengis Hasan, School of Engineering & Science
Advances in mobile networks, web technologies, and the popularity of smartphones have led to unprecedented growth in wireless data traffic. Computationally heavy applications that also produce large volumes of data, such as real-time visual information reporting, video-intensive games, and computer vision-based applications, are now available on resource-constrained mobile devices. This uptick in sophisticated mobile applications has not only increased the computational demand on the end device itself, but has also created an increasing load on the carrier's core networks.
To solve this problem, we consider the concept of spectrum-aware cognitive computation management, where all viable wireless interfaces of a multi-RAT-enabled device are used for computation offloading. Our AI-based solution enables the system to judiciously select the best service rates for each radio interface while accounting for application needs, including job completion deadlines, inherent dependencies between components, energy consumption, and memory and communication costs. This multi-objective optimization is solved using deep reinforcement learning, and neural networks are used to learn an approximate coverage set of policies.
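To make the decision problem concrete, here is a deliberately simplified sketch: tabular Q-learning that learns which radio interface to offload each job onto, with a scalarized latency-plus-energy reward. The state space, simulator, and reward below are invented for illustration only; the actual project uses deep reinforcement learning to approximate a coverage set of policies for the full multi-objective problem.

```python
# Toy sketch of offloading-interface selection via Q-learning (illustrative only).
# State: discretized channel quality of each radio interface (e.g., WiFi, LTE).
# Action: which interface to offload the next job on.
# Reward: scalarized negative cost (latency + energy); the actual project solves
# a multi-objective problem with deep RL rather than this tabular simplification.
import numpy as np

rng = np.random.default_rng(0)
n_levels, n_ifaces = 3, 2            # channel-quality levels, radio interfaces
Q = np.zeros((n_levels, n_levels, n_ifaces))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, iface):
    """Simulate offloading one job: better channel -> lower latency/energy."""
    quality = state[iface]                        # 0 = poor, 2 = good
    latency = rng.exponential(1.0 / (quality + 1))
    energy = 0.5 * latency
    next_state = tuple(rng.integers(0, n_levels, size=n_ifaces))
    return -(latency + energy), next_state        # scalarized reward

state = (1, 1)
for _ in range(20000):
    iface = (rng.integers(n_ifaces) if rng.random() < eps
             else int(np.argmax(Q[state])))
    reward, nxt = step(state, iface)
    Q[state][iface] += alpha * (reward + gamma * Q[nxt].max() - Q[state][iface])
    state = nxt

print("Learned interface choice per (wifi, lte) quality pair:")
print(np.argmax(Q, axis=-1))
```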
The demo features a real-time security video analysis application that monitors, detects, and identifies people and objects in confidential settings and sends alerts to smartphones when a potential threat is identified. The multi-RAT device adapts to network conditions, schedules the right components on the right device, and transmits the necessary data at appropriate rates over all interfaces.
Mukund Iyengar
Researcher, School of Engineering & Science
WiFi accounts for over 65% of all Internet access, and more than 90% of all Internet traffic is already video streaming. Despite rapid advancements, home WiFi continues to suffer from video drops, subpar network monitoring, and coverage issues in large homes.
The BlinkCDN team has re-imagined the WiFi stack for superior video delivery, fine-grained user control, and unsurpassed intelligent content delivery. We have re-written the WiFi stack from the ground up to make streaming services the primary traffic citizen. First, the Blink mesh intelligently routes packets in a home to ensure superior video quality, low latency, and wall-to-wall coverage. Second, it performs deep packet analysis to detect and fix problems before they occur. Third, Blink gives homeowners and ISPs fine-grained control over the WiFi experience by optimizing content delivery at a device and/or user level. An initial prototype is currently operational at Stevens. The Blink mission is to create "WiFi that delights".
Rooms for Humanity: Uniting Humanity on a Browser by Building Online Meeting Rooms
Mukund Iyengar
Researcher, School of Engineering & Science
With growing concerns about personal data collection and leaks, it is nearly impossible to have conversations with people on the Internet without leaving massive footprints. Even setting data leaks aside, it is simply not intuitive for a group of people to hold a video conference without running into serious compatibility issues: Skype, FaceTime, Zoom, etc., are bulky and limited, and most group web-conferencing applications are not free. Rooms For Humanity is a safe, easy, and always-on meeting place for geographically dispersed participants. Launched in April 2018, the site has already hosted more than 700 conversations, with usage in the US, Europe, Asia, and Africa. RFH vastly outperforms the competition along these lines: (i) the product is invisible, meaning no installs or downloads are necessary; (ii) it works on any platform with a browser and a camera; (iii) no sign-ups, logins, or passwords: we collect no data and leak no data; (iv) vastly superior video quality and ultra-low latency; (v) end-to-end content encryption, making it suitable even for financial applications; and (vi) very low operational costs: infrastructure-less peer-to-peer streams negate the need for expensive servers.
Cybersecurity
Shucheng Yu
Researcher, School of Engineering & Science
Bringing intelligence to IoT devices via deep learning is attractive for many smart systems. Due to hardware constraints, executing deep learning algorithms, or even just inference, directly on IoT devices is often impractical, especially for real-time applications. Offloading learning algorithms to edge devices or a remote cloud is a promising solution to this problem, but data privacy must be well preserved. Existing privacy-preserving deep learning algorithms are mostly based on techniques such as partially or fully homomorphic encryption, secure multiparty computation, or differential privacy, and they usually exhibit trade-offs between efficiency and the accuracy of the model/results.
In this project we design a novel protocol for privacy-preserving offloading of deep learning algorithms. Our protocol features extreme efficiency: execution is as fast as the original learning algorithms, aside from a slight communication delay, and the protocol does not incur any loss in model/result accuracy. We achieve this via a novel data encryption algorithm that is similar to a one-time pad and provides information-theoretic security. Our experimental results on a Raspberry Pi Model A show a 35x speedup (3.5 seconds vs. 125 seconds) for AlexNet inference compared to local execution on the Raspberry Pi. Our solution also saves 95% of energy consumption, which makes it promising for systems such as UAVs, where battery life is critical.
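The project's exact encryption scheme is not reproduced here, but the flavor of one-time-pad-style offloading can be illustrated with additive masking of a linear layer, where the unmasking term is precomputed offline. The numpy sketch below is our illustration under those assumptions, not the published protocol; real schemes of this kind work over quantized or finite domains to obtain true information-theoretic security.

```python
# Illustrative additive masking of a linear layer's offloaded computation.
# Offline: the device draws a random mask r and precomputes W @ r.
# Online:  it sends only x + r; the server computes W @ (x + r) and the
# device recovers W @ x by subtracting the precomputed W @ r.
# NOT the project's published protocol; real schemes mask over finite domains.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out = 4096, 1000
W = rng.standard_normal((d_out, d_in))   # model weights (device-owned)
x = rng.standard_normal(d_in)            # private input

# Offline phase (can run while the device is idle or charging):
r = rng.standard_normal(d_in)            # one-time mask, used only once
Wr = W @ r                               # precomputed unmasking term

# Online phase:
masked = x + r                           # the only thing the server sees
server_out = W @ masked                  # heavy computation, offloaded
y = server_out - Wr                      # device-side unmasking (cheap)

assert np.allclose(y, W @ x)
```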
Wendy Wang
Researcher, School of Engineering & Science
Deep learning based on artificial neural networks has proven effective in many applications. However, the massive data collection required for deep learning presents obvious privacy issues. A new deep learning model, collaborative deep learning (CDL), has thus emerged. CDL enables multiple parties (e.g., users) to jointly learn an accurate neural network model for a given objective without sharing their input datasets: the participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. Although CDL addresses the privacy issue, it raises a new concern: the integrity of the participating parties. Adversarial participants may send wrong local model parameters, aiming to pollute the global model.
In this project, we investigate how to efficiently verify whether the participating parties indeed returned correct local model parameters. Our verification method is lightweight: it is much cheaper than executing the learning locally. In this exhibition, we present our progress and the main research findings of this project.
K.P. Subbalakshmi
Researcher, School of Engineering & Science
Presenters: Mingxuan Chen and Djengis Hasan, School of Engineering & Science
Online social networks are extremely popular and known for being expedient disseminators of information. This ease of dissemination can be a double-edged sword, as social networks can also be used to spread rumors or computer malware. For instance, in 2013, a fake tweet about bombings in the White House, originating from a hacked Associated Press Twitter account, caused the Dow Jones Industrial Average to drop 145 points within two minutes. Clearly, it is necessary to detect the sources of such misinformation for rapid damage control, as well as to facilitate the design of sophisticated policies that prevent further viral spreading of misinformation through social networks in the future.
Since practical online social networks are vast, it is impossible to monitor an entire network continuously. One way to deal with this problem is to observe only a subset of designated nodes (called sensors). Although this approach offers advantages, some percentage of these sensor nodes may be unavailable at any given time. We address the problem of rumor source detection under these circumstances, using generative adversarial and autoencoder neural network architectures to overcome the missing information. The algorithm is then tested on datasets containing Twitter conversation threads associated with newsworthy events, including the Ferguson unrest, the shooting at Charlie Hebdo, the shooting in Ottawa, the hostage situation in Sydney, and the crash of a Germanwings plane.
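As a rough illustration of the imputation half of that approach, the following PyTorch sketch trains a denoising autoencoder to reconstruct a full sensor-observation vector when a random subset of sensors drops out. The data, shapes, and dropout rate are invented; the project additionally employs adversarial (GAN-style) training.

```python
# Minimal denoising-autoencoder sketch: reconstruct full sensor observations
# (e.g., infection times seen by monitoring nodes) when some sensors drop out.
# Purely illustrative; the project combines this idea with adversarial training.
import torch
import torch.nn as nn

n_sensors, hidden = 64, 32
model = nn.Sequential(
    nn.Linear(n_sensors, hidden), nn.ReLU(),
    nn.Linear(hidden, n_sensors),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    full = torch.rand(128, n_sensors)          # stand-in for real sensor data
    keep = (torch.rand_like(full) > 0.3).float()
    observed = full * keep                     # ~30% of sensors unavailable
    loss = loss_fn(model(observed), full)      # train to fill in the gaps
    opt.zero_grad()
    loss.backward()
    opt.step()

# At detection time, the imputed vector would feed the source-localization step.
```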
Energy and Environment
Yongzhen Fan, Nan Chen, Wei Li, and Knut Stamnes
Researchers
Satellite remote sensing techniques are important for monitoring, detecting, and predicting changes in the Earth's environment. Our major research activity is to use satellite data from a variety of sensors to infer atmosphere, ocean, and land surface properties such as cloud coverage, aerosol loading, surface classification, and air and water quality. By exploiting advanced AI techniques in conjunction with our comprehensive radiative transfer model, we have developed a powerful, new-generation satellite remote sensing data analysis system.
Machine Learning based Cloud Screening
Our machine learning-based cloud mask (CM) algorithm far outperforms traditional threshold-based methods under complex environmental conditions. (A toy sketch of this pixelwise-classification framing appears after the highlights below.)
Machine Learning based Ocean Color Algorithm
Our machine learning-based ocean color algorithm better reveals algal bloom patterns in the ocean, even during sandstorms.
Machine Learning based Snow Property Retrieval Algorithms
Our machine learning-based snow property retrieval algorithms provide improved monitoring of global warming effects.
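As promised above, here is a toy sketch of the framing behind machine learning cloud screening: treat cloud masking as pixelwise supervised classification of spectral features. The synthetic "reflectances" and the random forest choice are our assumptions; the actual algorithms are trained on many sensor bands together with radiative transfer simulations.

```python
# Toy pixelwise cloud-screening sketch: classify each pixel as cloud/clear from
# its spectral features. Synthetic data only; the real system couples ML with a
# radiative transfer model and many sensor bands.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_pixels, n_bands = 5000, 6
clear = rng.normal(0.1, 0.05, (n_pixels // 2, n_bands))   # darker pixels
cloud = rng.normal(0.6, 0.10, (n_pixels // 2, n_bands))   # bright across bands
X = np.vstack([clear, cloud]).clip(0, 1)
y = np.repeat([0, 1], n_pixels // 2)                      # 0 = clear, 1 = cloud

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```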
Fintech
Forecasting and Risk Management in Energy Markets: A Review of Machine Learning Approaches
Hamed Ghoddusi and German Creamer
Researchers, School of Business
We critically review the methods and findings of more than 130 articles published between 2005 and 2018 on applications of Machine Learning (ML) in energy economics. Our review identifies applications in areas such as predicting energy prices (e.g., crude oil, natural gas, and power), demand forecasting, volatility analysis, risk management, trading strategies, data processing, and analyzing macro/energy trends. Our analysis suggests that Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Genetic Algorithms are among the most popular techniques used in energy economics papers. We discuss the achievements and limitations of the existing literature, and the survey concludes by identifying current gaps and offering directions for future research.
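For readers unfamiliar with the surveyed setups, here is a minimal example of the most common pattern in that literature: an SVM regressor forecasting a price series one step ahead from lagged values. The data are synthetic and the hyperparameters arbitrary; this illustrates the technique class, not any specific paper's model.

```python
# Illustrative one-step-ahead price forecast with an SVM regressor on lagged
# prices, the kind of setup surveyed in the review. Data here are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
t = np.arange(600)
price = 60 + 10 * np.sin(t / 30) + rng.normal(0, 1.5, t.size)  # fake oil price

lags = 5
X = np.column_stack([price[i:i - lags] for i in range(lags)])  # lagged features
y = price[lags:]                                               # next-step target

split = 500
model = SVR(kernel="rbf", C=10.0).fit(X[:split], y[:split])
print("out-of-sample R^2:", model.score(X[split:], y[split:]))
```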
Foundations of AI and Machine Learning
Amir H. Gandomi, Ph.D.
Researcher
Evolutionary computation (EC) has been widely used during the last two decades and remains a highly researched topic, especially for complex real-world problems. EC techniques are a subset of artificial intelligence whose intelligence comes from biological systems, or nature in general. The efficiency of EC stems from its ability to imitate the best features of nature, which have evolved by natural selection over millions of years. The main theme of this project is EC techniques for (big) data mining and global optimization. On this basis, we expand and apply evolutionary computation to data mining and modeling; genetic programming, in particular, will be presented. As case studies, EC is applied to response modeling of complex, stochastic systems. In another phase, evolutionary optimization algorithms are extended and applied to key optimization problems, including large-scale, black-box, variable-length, and many-objective problems. Several heuristics are also introduced and adapted to EC that can significantly improve optimization results.
For more information, please visit: http://gandomi.beacon-center.org
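To ground the terminology, the sketch below is a minimal genetic algorithm, the selection/crossover/mutation loop at the heart of many EC techniques, optimizing a toy black-box function. All parameters are illustrative.

```python
# Minimal genetic-algorithm sketch for black-box optimization: the core EC loop
# of selection, crossover, and mutation. Illustrative only.
import numpy as np

rng = np.random.default_rng(4)

def fitness(x):                      # black-box objective: sphere function
    return -np.sum(x**2, axis=1)     # higher is better (optimum at the origin)

pop, dim, gens = 50, 10, 200
P = rng.uniform(-5, 5, (pop, dim))
for _ in range(gens):
    f = fitness(P)
    # Tournament selection: pick the better of two random individuals.
    i, j = rng.integers(pop, size=(2, pop))
    parents = np.where((f[i] > f[j])[:, None], P[i], P[j])
    # Uniform crossover between consecutive parents.
    mask = rng.random((pop, dim)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # Gaussian mutation on a random subset of genes.
    children += rng.normal(0, 0.1, children.shape) * (rng.random(children.shape) < 0.2)
    P = children

print("best solution norm:", np.linalg.norm(P[np.argmax(fitness(P))]))
```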
David A. Vaccari
Researcher, School of Engineering & Science
We are developing software, called TaylorFit, that performs Multivariate Polynomial Regression (MPR) for Response Surface Analysis (RSA), producing models with the predictive performance of Artificial Neural Networks (ANNs). Compared to linear methods, MPR shares with ANNs the ability to produce models that are far more accurate and less biased. But MPR models are more useful than ANNs because they are transparent, tractable, and transportable, in addition to being more compact and relatively resistant to overfitting. Multivariate polynomials are also much easier to manipulate, incorporate into other software, publish, and plot.
MPR models can also replace linear time-series modeling techniques such as ARMA (Box-Jenkins) models and can produce far superior results because they incorporate nonlinear effects. In these applications they represent a form of Nonlinear Autoregressive Moving Average with eXogenous variables (NARMAX) model (although without the "MA" part).
TaylorFit runs client-side in a browser, and we have made it available for free online at www.TaylorFit-RSA.com. Applications include business and finance, health and science, and almost any area where multivariate numerical data are collected and cause-and-effect relationships are expected.
“Data is expensive; computations are cheap.” We recommend that those who have analyzed data using linear methods or ANNs consider re-analyzing their data using MPR. The TaylorFit software makes it easy to quickly explore the parameter space and examine many kinds of complex relationships. Potential users can go to www.TaylorFit-RSA.com to download the Users’ Manual and start fitting models to their data.
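TaylorFit itself runs in the browser and builds polynomials stepwise, term by term; the scikit-learn sketch below only illustrates the MPR model class, fitting a dense degree-2 polynomial to synthetic data.

```python
# Sketch of multivariate polynomial regression (MPR) with scikit-learn.
# TaylorFit adds/removes individual polynomial terms stepwise; this dense
# polynomial fit just illustrates the model class on synthetic data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (400, 3))
y = 2 + X[:, 0] * X[:, 1] - 0.5 * X[:, 2]**2 + rng.normal(0, 0.05, 400)

mpr = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
print("R^2:", mpr.score(X, y))
# Unlike a neural net, the fitted model is an explicit polynomial whose
# coefficients can be read off, published, and plotted.
print(mpr.named_steps["linearregression"].coef_.round(2))
```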
Gary Engler and Michael Zabarankin
Researchers
The 2014 Nobel Prize in Physiology or Medicine was awarded for the discovery of place cells in the hippocampus of mammals and grid cells in the entorhinal cortex, which together form the brain's GPS system (cognitive map): humans and animals develop cognitive maps of environments to orient and navigate in space. Given the vast difference in energy requirements between the human brain and computers designed to simulate it, the importance of extracting the computational principles the brain uses, minus the biological redundancies, can hardly be overstated. Understanding how the brain solves the navigation problem in the context of the cognitive map is invaluable for gaining insight into those principles and, if successful, would yield efficient neuromorphic algorithms for solving various combinatorial optimization problems, including but not limited to optimal path planning and optimal resource allocation. This project demonstrates an algorithm for constructing a network of stochastic spiking neurons to solve the shortest path problem for a given graph. The algorithm is believed to imitate a potential manner in which the hippocampus carries out path planning using grid and place cells.
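The core intuition can be sketched compactly: if each synapse delays a spike by its edge weight and each neuron fires only once, at its earliest incoming spike, then first-spike times equal shortest-path distances from the source. The event-driven toy simulation below (functionally equivalent to Dijkstra's algorithm) is our simplification, not the project's stochastic spiking network.

```python
# Sketch of the spiking-network intuition behind neuromorphic shortest paths:
# synaptic delays encode edge weights, each neuron fires once at its earliest
# incoming spike, and first-spike times recover shortest-path distances.
import heapq

graph = {  # neuron -> [(neighbor, synaptic delay)]
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}

def first_spike_times(graph, source):
    fired = {}
    events = [(0.0, source)]            # (spike arrival time, neuron)
    while events:
        t, n = heapq.heappop(events)
        if n in fired:                  # refractory: a neuron fires only once
            continue
        fired[n] = t
        for m, delay in graph[n]:       # spike propagates along each synapse
            heapq.heappush(events, (t + delay, m))
    return fired

print(first_spike_times(graph, "A"))    # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
```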
Ihor Indyk and Michael Zabarankin
Researchers
More and more industry sectors rely on machine learning techniques for tasks ranging from spam filtering and credit card fraud detection to natural language processing and medical diagnosis. The performance of these techniques depends not only on the underlying mathematical algorithms but also, and largely, on the quality of the training data. It has been well demonstrated that an object in an image can easily be misclassified if the image is edited in a way not noticeable to the naked eye. In cyber warfare, this provides fertile soil for adversarial attacks, which can disrupt a number of vital automated processes simply by tampering with their training data. This project develops strategies for countering and mitigating the consequences of adversarial machine learning attacks.
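The imperceptible-perturbation phenomenon mentioned above is commonly demonstrated with the fast gradient sign method (FGSM). The sketch below shows FGSM on an untrained toy network; it illustrates the attack class only and says nothing about the project's own countermeasures. On a trained image classifier, perturbations of this size reliably flip predictions.

```python
# The classic fast gradient sign method (FGSM): nudge each input pixel a tiny
# step in the direction that increases the loss. Toy untrained model shown.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 784, requires_grad=True)   # stand-in for a normalized image
y = torch.tensor([3])                        # its true label

loss = loss_fn(model(x), y)
loss.backward()                              # gradient of loss w.r.t. the input

eps = 0.03                                   # small enough to look unchanged
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
print("prediction before:", model(x).argmax().item(),
      "after:", model(x_adv).argmax().item())
```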
Health and Biomed
Negar Tavassolian
Researcher, School of Engineering & Science
Cardiovascular diseases (CVDs) are among the most significant public health concerns worldwide. In 2015, more than 30% of global deaths were attributable to CVDs. It is well known that home-based monitoring systems would hugely benefit the self-care and self-management of CVDs, and wearable sensors are considered one of the most promising solutions for this purpose. Timely warnings and suggestions can be provided by analyzing the recordings from wearable monitors using artificial intelligence algorithms.
Cardio-mechanical sensing is a wearable sensing modality that our group has actively researched in recent years. It is defined as the measurement of heartbeat-induced chest wall vibrations by placing motion sensors on the chest wall of subjects. The recorded signals can be analyzed using a machine learning framework. With the power of artificial intelligence, early detection of irregular heart activities corresponding to specific CVD conditions becomes possible: the critical response time to CVDs would be shortened, and more lives could be saved.
Our group has recently developed a binary classifier of cardiovascular abnormalities using the time-frequency features of cardio-mechanical signals. Experimental measurements were performed on patients with various kinds of cardiovascular disease at Columbia University Medical Center, and control data were collected from healthy subjects at Stevens Institute of Technology. Sensitivity and specificity of more than 98% were achieved using our proposed framework. The results indicate the promising future of wearable sensors combined with artificial intelligence in the healthcare domain.
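A toy version of that pipeline, time-frequency features feeding a binary classifier, might look like the sketch below. The signals are synthetic sinusoids standing in for chest-vibration recordings, and the classifier choice is ours; the study used clinical data and a richer feature set.

```python
# Toy sketch: spectrogram-derived time-frequency features from chest-vibration
# signals feeding a binary classifier. Signals here are synthetic.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
fs, seconds = 500, 10

def synth(beat_hz):
    t = np.arange(fs * seconds) / fs
    return np.sin(2 * np.pi * beat_hz * t) + 0.5 * rng.standard_normal(t.size)

signals = [synth(1.0) for _ in range(40)] + [synth(2.5) for _ in range(40)]
labels = np.repeat([0, 1], 40)               # 0 = healthy, 1 = abnormal rhythm

def features(sig):
    f, t, Sxx = spectrogram(sig, fs=fs, nperseg=256)
    return np.log(Sxx + 1e-9).mean(axis=1)   # mean log-power per frequency bin

X = np.array([features(s) for s in signals])
print("cross-validated accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())
```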
Our future work involves the integration of multiple sensing modalities including cardio-mechanical signals, electrocardiograms, and photoplethysmograms into the wearable platform for a more accurate and advanced analysis of cardiovascular activities. AI-assisted sensor fusion and machine-learning algorithms will be leveraged to extract the cardiovascular features. Specific disease detection and categorization will also be performed using our framework.
R. Chandramouli
Researcher
Presenter: Harish Sista, School of Engineering & Science
Alzheimer’s disease is a common type of dementia that causes problems with memory, thinking, and behavior, and is the 6th leading cause of death in the US today. The symptoms develop slowly but worsen over time, eventually becoming severe enough to interfere with patients’ daily tasks. Early and accurate diagnosis could save up to $7.9 trillion in medical and care costs. On the theory that promising new drugs have a better chance of working if people begin taking them before the disease has inflicted a heavy toll on cognition, early detection has become a high priority in Alzheimer’s research. The Montreal Cognitive Assessment (MOCA) test is the most prevalent screening tool recommended by the Alzheimer’s Association because of its efficiency: a one-page, 30-point test administered in 10 minutes with the assistance of a medical team. This demo features an AI-based iOS application called “CoCoA-Bot” that can be administered in the privacy of one’s home. The application tests users and rates their performance across cognitive domains such as memory, visuospatial ability, attention, concentration, language, and orientation to time and place. Although the current application is available on the iOS platform, plans are in place to launch it on Android and as a web-based application. Stop by this table to take the MOCA test on an iOS phone.
Ramana Vinjamuri
Researcher
Presenter: Marty Burns, School of Engineering & Science
The Hand Exoskeleton with Embedded Synergies (HEXOES) is a soft, glove-based, cable-driven robotic exoskeleton. The system provides independent actuation of 10 joints of the hand using a remote actuator assembly and a lightweight hand component. The actuator assembly consists of 10 linear motors, embedded position and force sensors, and relevant electronics (microcontrollers, etc.) that enable actuation to produce movements similar to activities of daily living. The actuator assembly is connected to the hand component through a three-foot-long bundle of cable paths and sensor wires. The hand component, weighing 258 g, actuates the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joints of each finger and thumb in flexion; passive extension is provided by adjustable springs on the dorsal side of the hand. Design features allow individuals with various hand sizes to use the same exoskeleton effectively. Flex sensors placed over the actuated joints, along with position and force sensors on the robot’s linear actuators, enable closed-loop control. Biomimetic control mechanisms (patented) are embedded into the on-board memories of the microcontrollers.
R. Chandramouli and K.P. Subbalakshmi
Researchers
Presenters: Mingxuan Chen and Zongru (Doris) Shao, School of Engineering & Science
According to the World Health Organization, approximately 50 million people live with dementia worldwide, and there are 10 million new cases every year. Alzheimer’s disease is the most common form of dementia and may contribute to 60-70% of cases. The global societal economic cost of treating Alzheimer's patients is estimated to cross $1 trillion in 2018.
Detecting mental health impairments at an early stage can enhance patient care as well as improve chances of better management. However, it is a technically challenging problem. Current biomarker-based approaches for diagnosing early-stage cognitive disorders (CD) are complicated, intrusive, and expensive.
This work uses non-intrusive, statistical, machine learning-based linguistic analyses to detect signs of mental health disorders. Clinical datasets of patients with Aphasia, Alzheimer's, and dementia are used for experimental evaluation of the proposed approach.
Statistical analysis of linguistic features extracted from these datasets shows that the key-term position feature set, which represents concept transitions in conversations, is a key indicator of cognitive disorders. A model stacking algorithm is proposed to combine several weak machine learning models and feature sets. Tests on the datasets show that our algorithm achieves accuracies ranging from 84.1% to 98.4%.
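Model stacking in general combines the predictions of several weak base models through a meta-learner. The scikit-learn sketch below shows the generic pattern on synthetic data; the project's own stacking algorithm and linguistic feature sets are not reproduced here.

```python
# Generic model-stacking sketch: a meta-learner (logistic regression) combines
# several weak base models. Synthetic data stand in for linguistic features.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=3)),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),
)
print("stacked accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```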
Robotics, Perception and Human-Machine Interaction
Yi Guo
Researcher, School of Engineering & Science
The project develops a robot motion planner with human-like navigation features for improved pedestrian/mobile-robot navigation through crowded and unconstrained environments. Current mobile robot motion planners are inadequate because current models of pedestrian dynamics do not fully capture the complexity of human motion behavior in crowds. The project will use novel machine learning techniques to extract features related to pedestrian behavior from existing datasets and then train a deep neural network to model pedestrian dynamics. The project will also study socially normative behaviors and criteria associated with robot navigation. The project contributes to human-robot interaction and may result in public safety applications such as emergency evacuation and crowd planning/management.
Learning-Based Methods to Improve Accuracy of Wearable Motion Capture Systems
Damiano Zanotto
Researcher, School of Engineering & Science
Presenters: Huanghe Zhang, Ton Duong, School of Engineering & Science
Robotic exoskeletons have shown great potential in both healthcare and military applications in recent years. The trend toward soft wearable robotic systems, or exosuits, creates a need for new and reliable closed-loop control sensors that do not require a rigid mounting frame. Among the available alternatives for kinematic measurement, inertial measurement units (IMUs) are a preferred choice because they are lightweight, low-cost, and widely available. However, sensor drift and IMU-to-segment misalignment still represent major problems in applications requiring high accuracy. In the Wearable Robotic Systems Lab, we have been developing calibration algorithms that leverage the vast expressive capability of machine learning regression and the periodic nature of human locomotion to obtain accurate, drift-free stride-to-stride estimates of kinematic and kinetic parameters from raw sensor data.
In this demo session, our group will present a custom-designed wearable inertial motion capture system (IMCS) that consists of instrumented insoles and a belt module. The belt module includes three 9-degree-of-freedom (DOF) IMUs located on the sacrum and on the lateral side of each thigh. The instrumented footwear includes a multi-cell piezo-resistive insole and a 9-DOF IMU sandwiched between two layers of abrasion-resistant foam. Data are streamed wirelessly to a data logger and to a graphical user interface (GUI). Synchronization procedures make it possible to temporally align data from the wireless sensors within 2 ms. A new IMU-to-segment functional calibration procedure compensates for IMU-to-segment misalignment. Learning-based regression models reduce systematic errors and drift, leading to <2% RMS errors.
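The regression framing can be illustrated with a toy example: map per-stride summaries of raw IMU data to a ground-truth stride length. Everything below, features, model choice, and data, is a synthetic stand-in we constructed, not the lab's actual calibration models.

```python
# Toy sketch of learning-based gait calibration: regress drift-free stride
# length from per-stride summaries of raw IMU data. Entirely synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_strides = 800
# Per-stride features, e.g., integrated acceleration, stride time, peak gyro.
features = rng.normal(size=(n_strides, 3))
true_length = (1.2 + 0.1 * features[:, 0] - 0.05 * features[:, 1]
               + rng.normal(0, 0.02, n_strides))        # meters (synthetic)

reg = GradientBoostingRegressor()
print("cross-validated R^2:",
      cross_val_score(reg, features, true_length, cv=5).mean())
```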
This platform technology is currently being used as a research tool in several healthcare-related projects. The IMCS will quantify changes in gait parameters pre/post hip arthroscopy in young athletes. The instrumented footwear will be used as part of a robot-assisted gait exercise protocol for older adults. A pediatric version of the insoles was also developed and is currently being used for unobtrusive gait assessments in children with autism spectrum disorder (ASD) and spinal muscular atrophy (SMA).
Minimalistic and Learning-Enabled Autonomous Navigation with Unmanned Ground Vehicles
Brendan Englot
Researcher, School of Engineering & Science
We will demonstrate novel autonomous navigation capabilities that allow our small electric unmanned ground vehicle to localize itself accurately, build a descriptive terrain map, and perform path planning supported only by a low-cost 16-beam lidar and an embedded computer. Localization will be achieved using a new algorithm for Lightweight, Ground-Optimized Lidar Odometry and Mapping (LeGO-LOAM) that relies on a small number of features to support low-drift scan-matching. Terrain mapping will be achieved by framing the estimation of terrain height and the classification of terrain traversability as supervised learning problems, and applying Bayesian generalized kernel inference to solve them. Finally, both of these products will be used to support efficient path planning and navigation.
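As a rough sketch of the terrain-mapping framing, the code below predicts terrain height at query points from scattered lidar returns with a simple Gaussian kernel smoother. This Nadaraya-Watson estimator is a stand-in chosen for brevity; the actual system uses Bayesian generalized kernel inference and also classifies traversability.

```python
# Sketch of terrain-height inference from sparse lidar returns via a Gaussian
# kernel smoother. Illustrative stand-in for Bayesian generalized kernel
# inference; data are synthetic.
import numpy as np

rng = np.random.default_rng(8)
pts = rng.uniform(0, 10, (500, 2))                       # lidar hits: (x, y)
z = np.sin(pts[:, 0]) + 0.1 * rng.standard_normal(500)   # measured heights

def predict_height(query, length_scale=0.8):
    d2 = np.sum((pts - query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / length_scale**2)               # Gaussian kernel
    return np.sum(w * z) / (np.sum(w) + 1e-12)            # weighted average

grid = [(x, y) for x in np.linspace(0, 10, 5) for y in np.linspace(0, 10, 5)]
heights = [predict_height(np.array(q)) for q in grid]
print(np.round(heights[:5], 2))
```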
Societal Impact
Michael S. Kowal
Researcher, College of Arts & Letters
Scholars have begun to conceptualize political parties as loose confederations of interest groups, candidates, and formal party organizations known as extended party networks (EPNs). EPNs form the backbone of modern partisan strategy and provide the connections necessary for collaboration in an era of decentralized party power structures. An important aspect of the modern campaign is the ability to get the message out to voters: through campaign advertising, social media, and other forms of communication, campaigns, candidates, and interest groups devote enormous amounts of time to getting out their messages and narratives. However, little research has been conducted on how issues and messages diffuse throughout the network. By studying the flow and adoption of messages through the text of social media posts, it is possible to better understand the party network itself.
I present a network and text analytic approach to the diffusion and adoption of messages through not only the text of campaign ads but also the overlap and similarity of posts by formal party organizations, candidate campaigns, and interest groups on social media. Using the text of Facebook and Twitter posts by campaigns, candidates, party committees, and interest groups in the 2016 election, this project takes a two-step approach to understanding partisan collaboration and the spread of messages. First, I analyze the text of these campaign messages to determine the most important issues for each actor. The study also employs a text reuse approach to determine the similarity of posts and campaign ads to one another: I measure how similar the messages between any two interest groups or candidates may be, and I use temporal analysis to track not only the similarity of messages but also their diffusion through the network. Second, I use these measures to construct a network of the 2016 campaign.
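The text-reuse measurement can be boiled down to a few lines: represent posts as TF-IDF vectors and compute pairwise cosine similarity. The posts below are invented, and real studies layer temporal alignment and more robust reuse detection on top of a measure like this.

```python
# Sketch of text-reuse measurement: TF-IDF vectors plus cosine similarity
# quantify how much two actors' posts overlap. Posts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "Our plan cuts taxes for working families",          # candidate A
    "The plan will cut taxes for every working family",  # interest group
    "Join us at the rally downtown this Saturday",       # unrelated message
]
tfidf = TfidfVectorizer().fit_transform(posts)
print(cosine_similarity(tfidf).round(2))   # pairwise message similarity
```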
Jeffrey V. Nickerson
Researcher, School of Business
Presenters: Lei Zheng, Mai Feng, Deborah M. Gordan & Jeffrey V. Nickerson, School of Business
Online communities often depend on implicit forms of coordination, given the fluidity of their membership and the lack of traditional hierarchies and associated incentive structures. This coordination drives knowledge production. Studying temporal dynamics may help elucidate how coordination happens. Specifically, the rate of interaction with an artifact such as a Wikipedia page can function as a signal that affects future interaction. This is a special case of stigmergic coordination, prevalent in biological settings. Many activities can be characterized as bursty, meaning activity is not evenly spread or random, but is instead concentrated. Analyzing 3260 Wikipedia articles, this study shows that the coordination pattern in the Wikipedia community is bursty. Moreover, article burstiness is a predictor of article quality, as tested by a regression that controls for other variables related to the writing process. A mechanism based on excitation and inhibition caused by the rate of interaction with articles can explain the burstiness of editing. This mechanism is described and discussed through agent-based simulation experiments and tested against empirical distributions. This work highlights the important role temporal dynamics can play in the coordination process in online communities, and how it can affect the quality of knowledge production.
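Burstiness is commonly quantified (following Goh and Barabási) as B = (s - m) / (s + m) for inter-event times with mean m and standard deviation s, so that B is near 1 for bursty activity, near 0 for Poisson-like activity, and -1 for perfectly regular activity. The study's exact operationalization may differ; the quick illustration below uses this standard measure.

```python
# Burstiness coefficient B = (s - m) / (s + m) of inter-event (inter-edit)
# times: ~1 is bursty, ~0 is Poisson-like, -1 is perfectly regular.
import numpy as np

rng = np.random.default_rng(9)

def burstiness(inter_event_times):
    m, s = inter_event_times.mean(), inter_event_times.std()
    return (s - m) / (s + m)

regular = np.full(1000, 1.0)                   # evenly spaced edits
poisson = rng.exponential(1.0, 1000)           # random (memoryless) edits
bursty = rng.pareto(1.5, 1000)                 # heavy-tailed gaps
for name, gaps in [("regular", regular), ("poisson", poisson), ("bursty", bursty)]:
    print(name, round(burstiness(gaps), 2))
```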
Jeffrey V. Nickerson
Researcher, School of Business
Presenters: Ramana Nagasamudram, Aubhik Mazumdar, and Jeffrey V. Nickerson, School of Business
A system enabling the study of remixing in a collaborative environment is described. Determining how knowledge of one design affects another has historically been difficult; remixing networks provide a path to understanding. Moreover, providing the ability to remix designs encourages involvement and accelerates the collective design process. The system analyzes the current remixing network and recommends designs that increase the likelihood of covering the design space, thereby discovering innovative designs. Incorporating these techniques in industry environments may help accelerate innovation. The system built to create remixing experiments is applied to the design of simultaneous contest conditions in the 3D printing domain.