<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Repository Collection:</title>
  <link rel="alternate" href="http://hdl.handle.net/10071/145" />
  <subtitle />
  <id>http://hdl.handle.net/10071/145</id>
  <updated>2026-04-13T13:00:36Z</updated>
  <dc:date>2026-04-13T13:00:36Z</dc:date>
  <entry>
    <title>Enhanced multiple instance learning for breast cancer detection in mammography: Adaptive patching, advanced pooling, and deep supervision</title>
    <link rel="alternate" href="http://hdl.handle.net/10071/35734" />
    <author>
      <name>Sarwar, Fareeha</name>
    </author>
    <author>
      <name>Garrido, Nuno Miguel de Figueiredo</name>
    </author>
    <author>
      <name>Sebastiao, Pedro</name>
    </author>
    <author>
      <name>Silveira, Margarida</name>
    </author>
    <id>http://hdl.handle.net/10071/35734</id>
    <updated>2025-12-11T19:24:11Z</updated>
    <published>2025-07-01T00:00:00Z</published>
    <summary type="text">Title: Enhanced multiple instance learning for breast cancer detection in mammography: Adaptive patching, advanced pooling, and deep supervision
Authors: Sarwar, Fareeha; Garrido, Nuno Miguel de Figueiredo; Sebastiao, Pedro; Silveira, Margarida
Abstract: This paper addresses the challenge of weakly supervised learning for breast cancer detection in mammography by introducing an Enhanced Embedded Space MI-Net model with deep supervision. The framework integrates adaptive patch creation, convolutional feature extraction, and five pooling methods - max, mean, log-sum-exp, attention, and gated attention pooling - evaluated in three MIL models: Instance Space mi-Net, Embedded Space MI-Net, and Enhanced Embedded Space MI-Net. A key contribution is the incorporation of deep supervision, which improves feature learning across network layers and enhances bag-level classification performance. Experimental results on the CBIS-DDSM dataset demonstrate that the Enhanced MI-Net model achieves the highest AUC of 86% with attention pooling. This work addresses the gap in leveraging MIL techniques for high-resolution medical imaging without requiring detailed annotations, offering a robust and scalable solution for breast cancer detection. Clinical Relevance: This study highlights the potential of MIL-based models with attention pooling to accurately detect breast cancer in mammographic images without requiring detailed ROI annotations, offering a scalable and efficient diagnostic tool for clinical practice.</summary>
    <dc:date>2025-07-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Livro de atas do SASIG 2015</title>
    <link rel="alternate" href="http://hdl.handle.net/10071/10301" />
    <author>
      <name />
    </author>
    <id>http://hdl.handle.net/10071/10301</id>
    <updated>2018-02-02T16:02:27Z</updated>
    <published>2015-12-01T00:00:00Z</published>
    <summary type="text">Title: Livro de atas do SASIG 2015
Editors: Costa, Carlos; Ferreira, Victor; Santos, Hugo; Pereira, Pedro; Carreira, Duarte; Gil, Artur
Abstract: In 2015, OSGeo-PT, the Portuguese Local Chapter of OSGeo - the Open Source Geospatial Foundation - in collaboration with ISCTE-IUL, organised the 6th edition of SASIG, the National Conference on Open-Source Software for Geographic Information Systems.</summary>
    <dc:date>2015-12-01T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Multidimensional analysis of public open spaces: urban morphology, parametric modelling and data mining</title>
    <link rel="alternate" href="http://hdl.handle.net/10071/9686" />
    <author>
      <name>Lopes, João V.</name>
    </author>
    <author>
      <name>Paio, Alexandra</name>
    </author>
    <author>
      <name>Beirão, José N.</name>
    </author>
    <author>
      <name>Pinho, Eliana Manuel</name>
    </author>
    <author>
      <name>Nunes, Luís</name>
    </author>
    <id>http://hdl.handle.net/10071/9686</id>
    <updated>2018-02-02T16:03:25Z</updated>
    <published>2015-09-09T00:00:00Z</published>
    <summary type="text">Title: Multidimensional analysis of public open spaces: urban morphology, parametric modelling and data mining
Authors: Lopes, João V.; Paio, Alexandra; Beirão, José N.; Pinho, Eliana Manuel; Nunes, Luís
Abstract: Public open spaces (parks, squares and other gathering places) can only be grasped from a simultaneous view of their attributes. In an ongoing PhD research project, we propose to overcome the limitations of traditional descriptive urban morphology methods in dealing with this simultaneity, which derives from the many shapes, functions, uses and relations of these spaces within the urban structure. After developing the relations between formal attributes and intangible spatial properties, their identity and proximity may be disclosed by multivariate statistical analysis and data mining techniques. We outline a multidimensional method for the synchronic analysis and classification of public open spaces, departing from a research corpus of 126 Portuguese urban squares whose analysis is intended to interactively (re)define it. The work done so far is presented, comprising: (i) firming up the concepts, criteria and attributes to extract; (ii) a survey of theories, methods and spatial analysis tools, with identification of their shortcomings; (iii) adaptation and/or creation of new methods and tools; (iv) creation of databases from CAD and GIS environments; (v) research on multivariate analysis, data mining and data visualization techniques.</summary>
    <dc:date>2015-09-09T00:00:00Z</dc:date>
  </entry>
  <entry>
    <title>Human Activity Recognition and Prediction</title>
    <link rel="alternate" href="http://hdl.handle.net/10071/9466" />
    <author>
      <name>Jardim, David</name>
    </author>
    <author>
      <name>Nunes, Luís</name>
    </author>
    <author>
      <name>Dias, Miguel Sales</name>
    </author>
    <id>http://hdl.handle.net/10071/9466</id>
    <updated>2018-02-02T16:01:28Z</updated>
    <published>2015-07-28T00:00:00Z</published>
    <summary type="text">Title: Human Activity Recognition and Prediction
Authors: Jardim, David; Nunes, Luís; Dias, Miguel Sales
Abstract: Human activity recognition (HAR) has become one of the most active research topics in image processing and pattern recognition (Aggarwal, J. K. and Ryoo, M. S., 2011). Detecting specific activities in a live feed or searching in video archives still relies almost entirely on human resources. Detecting multiple activities in real-time video feeds is currently performed by assigning multiple analysts to simultaneously watch the same video stream. Manual analysis of video is labour-intensive, fatiguing, and error-prone. Solving the problem of recognizing human activities from video can lead to improvements in several application fields, such as surveillance systems, human-computer interfaces, sports video analysis, digital shopping assistants, video retrieval, gaming and healthcare (Popa et al., n.d.; Niu, W. et al., n.d.; Intille, S. S., 1999; Keller, C. G., 2011). This area has grown dramatically in the past 10 years, and throughout our research we identified a potentially underexplored sub-area: action prediction. What if we could infer the future actions of people from visual input? We propose to expand current vision-based activity analysis to a level where it is possible to predict the future actions executed by a subject. We are interested in interactions that can involve a single actor, two humans, and/or simple objects - for example, predicting whether "a person will cross the street" or "a person will try to steal a handbag from another", or where a tennis player will target the next volley. Using a hierarchical approach, we intend to represent high-level human activities that are composed of simpler activities, usually called sub-events, which may themselves be decomposable. We expect to develop a system capable of predicting the next action in a sequence, initially using offline learning to bootstrap the system and then, with self-improvement and task specialization in mind, using online learning.</summary>
    <dc:date>2015-07-28T00:00:00Z</dc:date>
  </entry>
</feed>

