Datasets and Resources

 
  • SEAGULL Dataset: Multi-camera multi-spectrum ocean images from a UAV point of view, 2013 – 2015. Dataset page.
    A multi-camera, multi-spectrum (visible, infrared, near-infrared and hyperspectral) image sequence dataset for research on sea monitoring and surveillance. The image sequences are recorded from the point of view of a fixed-wing unmanned aerial vehicle (UAV) flying above the Atlantic Ocean, over boats, life rafts and other objects, as well as over fish oil spills.
  • HDA Dataset: High Definition Analytics Camera Network at ISR, July 2013. Dataset page.
    HDA is a multi-camera, high-resolution image sequence dataset for research on high-definition surveillance. 18 cameras (with VGA, HD and Full HD resolutions) were recorded simultaneously for 30 minutes in a typical indoor office scenario at a busy hour (lunch time), involving more than 80 people. Each frame is labeled with bounding boxes tightly adjusted to the visible body of each person, the unique identification of each person, and flag bits indicating occlusion and crowding (a sketch of this label schema appears after this list).
    [Figures: examples of labeled frames: (a) an unoccluded person; (b) two occluded people; (c) a crowd with three people in front.]
  • A Dataset on Visual Affordances of Objects and Tools, September 2016. Available on this GitHub repository.
    This dataset contains the results of trials in which the robot executes different actions with multiple tools on various objects. In total, there are 11 objects, 4 actions, 3 tools and at least 10 repetitions of each trial, for a total of ~1320 unique trials and 5280 unique images at a resolution of 320×200 pixels (a sketch enumerating the trials appears after this list). The images are captured from the robot's left and right cameras before and after a successful action is executed. We also provide the foreground-segmented images of each trial. Moreover, the 3D position of the object, together with some extracted features, is also available. For more information, please contact adehban at isr dot tecnico dot ulisboa dot pt.
    If you use this dataset in your work, please cite the following publication(s):

    • Towards learning deep features for multi-modal inference with robotic data. Dehban, A., Saponaro, G., Zhang, S., Jamone, L., Kampff, A. R. and Santos-Victor, J. (2017). International Journal of Robotics Research (submitted).
    • A Moderately Large Size Dataset to Learn Visual Affordances of Objects and Tools Using iCub Humanoid Robot. Dehban, A., Jamone, L., Kampff, A. R. and Santos-Victor, J. (2016). 1st Workshop on Action and Anticipation for Visual Learning, European Conference on Computer Vision (ECCV).
  • CAD models (PDMS molds) for the tactile sensors of the Vizzy humanoid robot, September 2016. Download link: tactile_sensors_pdms_molds.zip.
  • Improved annotation for the INRIA person data set, Matteo Taiana, 2014
    The INRIA person data set is very popular in the pedestrian detection community, both for training detectors and for reporting results. Yet its labelling has some limitations: some of the pedestrians are not labelled, there is no specific label for ambiguous cases, and information on the visibility ratio of each person is missing. We collected a new labelling that overcomes these limitations and can be used to evaluate the performance of detection algorithms in a more faithful way. It also allows researchers to test the influence of partial occlusion and of pedestrian height during training on detection performance. The labels are encoded in the format used for the Caltech Pedestrian Detection Benchmark, in two vbb files (a loading sketch for this format appears after this list).

    Related papers: Neurocomputing 2014, IbPRIA 2013. Related video talk.

  • Real-World Data for 3D Reconstruction Algorithms: Tracked features in uncalibrated images, Etienne Grossmann, July 1999
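
The HDA label schema described above (bounding box, person identity, occlusion and crowd flags) maps naturally onto a small record type. The sketch below is a minimal illustration, assuming a hypothetical per-frame CSV export; the file layout and column order are assumptions for illustration, not the dataset's actual distribution format.

```python
from dataclasses import dataclass
import csv

@dataclass
class HdaLabel:
    """One labeled person in one frame of the HDA dataset."""
    camera: int      # camera index (1..18)
    frame: int       # frame number within the sequence
    person_id: int   # unique identification of the person
    x: int           # bounding box, tightly adjusted to the visible body
    y: int
    width: int
    height: int
    occluded: bool   # flag bit: the person is partially occluded
    crowd: bool      # flag bit: the person is part of a crowd

def load_labels(path):
    """Parse labels from a hypothetical CSV export (column order assumed)."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            cam, frm, pid, x, y, w, h, occ, crw = map(int, row)
            yield HdaLabel(cam, frm, pid, x, y, w, h, bool(occ), bool(crw))
```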
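
Each trial in the visual affordances dataset is identified by an (object, action, tool, repetition) tuple with four associated images (left/right camera, before/after the action). The sketch below only illustrates that combinatorics; the directory scheme and file names are hypothetical, not the repository's actual layout.

```python
import itertools
from pathlib import Path

N_OBJECTS, N_ACTIONS, N_TOOLS, N_REPS = 11, 4, 3, 10  # counts from the description

def trial_image_paths(root):
    """Enumerate the four images of every trial.
    The directory scheme used here is assumed for illustration."""
    root = Path(root)
    for obj, act, tool, rep in itertools.product(
            range(N_OBJECTS), range(N_ACTIONS), range(N_TOOLS), range(N_REPS)):
        trial = root / f"obj{obj:02d}_act{act}_tool{tool}_rep{rep:02d}"
        yield [trial / f"{cam}_{moment}.png"
               for cam in ("left", "right")
               for moment in ("before", "after")]

# 11 objects x 4 actions x 3 tools x 10 repetitions = 1320 trials,
# each with 4 images, giving the 5280 images quoted above.
assert N_OBJECTS * N_ACTIONS * N_TOOLS * N_REPS == 1320
```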
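
The vbb files of the improved INRIA annotation follow the Caltech benchmark's format, which is stored as a MATLAB file, so it can usually be read from Python with scipy.io.loadmat. The sketch below follows the field layout commonly documented for vbb files (nFrame, objLists, objLbl, with per-object id, pos, posv and occl entries); treat the exact field names and indexing as assumptions and verify them against the files themselves.

```python
from scipy.io import loadmat

def read_vbb(path):
    """Read per-frame annotations from a Caltech-style vbb file.
    Field names follow the commonly documented layout; verify on the data."""
    A = loadmat(path)["A"][0][0]
    n_frames = int(A["nFrame"][0][0])
    labels = [str(lbl[0]) for lbl in A["objLbl"][0]]  # label per object id
    frames = []
    for frame_objs in A["objLists"][0][:n_frames]:
        objs = []
        if frame_objs.size > 0:
            for oid, pos, posv, occl in zip(frame_objs["id"][0], frame_objs["pos"][0],
                                            frame_objs["posv"][0], frame_objs["occl"][0]):
                oid = int(oid[0][0])
                objs.append({
                    "id": oid,
                    "label": labels[oid - 1],      # object ids are 1-based in MATLAB
                    "pos": pos[0].tolist(),        # full bounding box [x, y, w, h]
                    "posv": posv[0].tolist(),      # visible part of the box
                    "occluded": bool(occl[0][0]),  # partial-occlusion flag
                })
        frames.append(objs)
    return frames
```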