ICML-2001 Workshops


Note that workshops will be held on Thursday, June 28. The conference technical program begins on June 29.


Machine Learning for Spatial and Temporal Data

What is the next generic tool for machine learning? This workshop will explore how it might be possible to construct a general-purpose tool for learning from temporal and spatial data. Many emerging applications of machine learning require learning a mapping y = F(x) where the x's and the y's are complex objects such as time series, sequences, 2-dimensional maps, images, and GIS layers. Examples of such applications include various forms of information extraction, land-cover prediction in remote sensing, protein secondary structure prediction, identifying fraudulent transactions, computer intrusion detection, and classical problems such as text-to-speech mapping and speech recognition. The purpose of this workshop is to bring together researchers from several fields to discuss research and application challenges in this area. Specifically, we will ask the participants to assess the various existing approaches to learning from spatial and temporal data, the state of the underlying theory, the state of existing tools and tool kits, and the prospects for developing new off-the-shelf tools.

Organizers:

Thomas G. Dietterich, Oregon State University, tgd@cs.orst.edu

Foster Provost, NYU, fprovost@stern.nyu.edu

Padhraic Smyth, UC Irvine, smyth@ics.uci.edu



Hierarchy and Memory in Reinforcement Learning

In recent years, much of the research in reinforcement learning has focused on learning, planning, and representing knowledge at multiple levels of temporal abstraction. If reinforcement learning is to scale to larger, more realistic problems, it is essential to consider hierarchical approaches in which complex learning tasks are decomposed into subtasks. Recent and past work has shown that hierarchical approaches substantially increase the efficiency and capabilities of RL systems. Other recent work has demonstrated additional benefits from using memory in reinforcement learning, both alone and in combination with hierarchical approaches. This workshop will be an opportunity for researchers in this growing field to share knowledge and expertise on the topic, open lines of communication for collaboration, avoid redundant research, and possibly agree on standard problems and techniques.

Organizers:

David Andre, UC Berkeley, dandre@cs.berkeley.edu

Anders Jonsson, Univ. of Massachusetts, Amherst, ajonsson@cs.umass.edu