Studies in Health Technology and Informatics, 2021
2019 IEEE International Conference on Big Data (Big Data)
The abandonment rate of patients who use CPAP devices for obstructive sleep apnea (OSA) therapy is as high as 60%. However, there is growing evidence that timely and appropriate intervention can improve long-term adherence to therapy. Current practice in sleep clinics for identifying patients likely to abandon treatment is not sufficiently effective in terms of accuracy and timeliness. Recent proposals in the literature have tried to identify non-adherent patients at a specific period of their therapy; however, there is no generalized approach by which clinical providers can monitor their patients continually with the goal of maximizing adherence. Towards this more generic goal, we propose CTAP-CPAP, a Continuous Treatment Adherence Prediction framework. With CTAP-CPAP, we address the problem of generalizing the prediction to any day of the treatment, implementing a robust framework with multiple machine learning models to assist medical practitioners in keeping track of each patient's risk of non-adherence. Aiming at the parallel progress of both the machine learning and health informatics fields, we complement the study with a transparent discussion of the machine learning techniques used to build CTAP-CPAP and our view of its operationalization in a sleep clinic.
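As a minimal illustration of continuous, day-by-day adherence monitoring (a simple rule-based flag using the common >=4 hours on >=70% of nights criterion over a rolling 30-night window, not the CTAP-CPAP machine-learning models themselves):

```python
# Illustration only: rule-based daily adherence flag for CPAP usage.
# For each day of therapy, look back over up to the last 30 nights and
# check whether at least 70% of them had >=4 hours of device use.

WINDOW = 30          # nights in the rolling window
MIN_HOURS = 4.0      # a night "counts" at 4+ hours of use
MIN_FRACTION = 0.7   # required fraction of compliant nights

def adherence_flags(nightly_hours):
    """Return one True/False flag per night: on track so far?"""
    flags = []
    for day in range(1, len(nightly_hours) + 1):
        window = nightly_hours[max(0, day - WINDOW):day]
        compliant = sum(h >= MIN_HOURS for h in window)
        flags.append(compliant / len(window) >= MIN_FRACTION)
    return flags

# Made-up nightly usage hours for one patient's first 10 nights.
usage = [6.5, 5.0, 0.0, 4.5, 3.0, 7.0, 6.0, 0.0, 2.0, 5.5]
print(adherence_flags(usage))
```

A machine-learning framework like the one described above would replace this fixed threshold rule with per-day predictive models, but the continuous, any-day evaluation loop is the same.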
Design of Medical Devices Conference, 2020
Upper-Airway Stimulation (UAS) therapy is an innovative alternative to Continuous Positive Airway Pressure (CPAP) treatment for patients with obstructive sleep apnea (OSA) and CPAP intolerance. Patients with an implanted UAS device are responsible for activating and managing the therapy at home before sleep. Consistent nightly use is required to reduce OSA burden, as measured by the apnea-hypopnea index. Thus, understanding patient behavior and possible barriers to nightly use is crucial to therapy success. In this work, we present two novel visualizations for monitoring telemetry data recorded by the UAS sleep remote. They provide doctors and sleep clinicians with detailed information to easily classify therapy use and sleep patterns. We also show how to present daily metrics, such as average hours of usage, therapy intensity, and duration of therapy pauses, to identify optimal therapy settings and measure the long-term effectiveness of interventions.
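A hedged sketch of how such daily metrics could be derived from remote telemetry (the event format and timestamps below are hypothetical, not the actual UAS sleep remote data model):

```python
# Hypothetical telemetry: a list of (timestamp_in_hours, event) records
# per night, where events are "start", "pause", "resume", "stop".
# From these we derive total therapy hours and total pause hours.

def night_metrics(events):
    """Return (therapy_hours, pause_hours) for one night of events."""
    therapy = pause = 0.0
    last_t = last_state = None
    for t, ev in events:
        if last_state == "on":
            therapy += t - last_t
        elif last_state == "paused":
            pause += t - last_t
        last_state = {"start": "on", "resume": "on",
                      "pause": "paused", "stop": None}[ev]
        last_t = t
    return therapy, pause

# Start 1h after baseline, pause from 4.0h to 4.5h, stop at 8.0h.
night = [(1.0, "start"), (4.0, "pause"), (4.5, "resume"), (8.0, "stop")]
print(night_metrics(night))  # therapy and pause durations in hours
```

Aggregating these per-night values over weeks gives the kind of usage and pause-duration trends the visualizations described above are meant to surface.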
We present an interactive tool that visualizes data on awareness of several health conditions in the Middle East. The underlying data is obtained via Facebook's Marketing API and includes rich demographic details. We discuss how this tool may be useful for planning more targeted public health campaigns and for monitoring campaign effectiveness. Application URL: http://scdev5.qcri.org/sha/
Every day, millions of users reveal their interests on Facebook, which are then monetized via targeted advertisement marketing campaigns. In this paper, we explore the use of demographically rich Facebook Ads audience estimates for tracking non-communicable diseases around the world. Across 47 countries, we compute the audiences of marker interests, and evaluate their potential in tracking health conditions associated with tobacco use, obesity, and diabetes, compared to the performance of placebo interests. Despite its huge potential, we find that, for modeling prevalence of health conditions across countries, differences in these interest audiences are only weakly indicative of the corresponding prevalence rates. Within the countries, however, our approach provides interesting insights on trends of health awareness across demographic groups. Finally, we provide a temporal error analysis to expose the potential pitfalls of using Facebook's Marketing API as a black box.
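The marker-versus-placebo evaluation idea can be sketched as follows (all audience shares and prevalence rates below are invented for illustration, not real Marketing API output or health statistics):

```python
# Sketch: correlate each interest's audience share with disease
# prevalence across countries, and compare a "marker" interest
# against a "placebo" (unrelated) interest.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fraction of each country's Facebook audience expressing the interest
# (5 made-up countries).
marker_share  = [0.12, 0.08, 0.15, 0.05, 0.10]   # e.g. a tobacco-related interest
placebo_share = [0.29, 0.31, 0.30, 0.28, 0.30]   # an unrelated interest
prevalence    = [0.22, 0.15, 0.27, 0.09, 0.18]   # made-up smoking prevalence

r_marker = pearson(marker_share, prevalence)
r_placebo = pearson(placebo_share, prevalence)
print(f"marker r={r_marker:.2f}  placebo r={r_placebo:.2f}")
```

The finding reported above is that, on real data, marker interests often do not separate from placebos nearly as cleanly as in this toy setup.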
EPJ Data Science, 2016
In the last few years, thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged, and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiments, including lexical-based and supervised machine learning methods. Despite the vast interest in the theme and the wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need to conduct a thorough apples-to-apples comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originating from different data sources. Such a comparison is key for understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight that the prediction performance of these methods varies considerably across datasets. Aiming to boost the development of this research area, we release the methods' codes and the datasets used in this article, deploying them in a benchmark system that provides an open API for accessing and comparing sentence-level sentiment analysis methods.
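The apples-to-apples benchmarking described above can be sketched as a per-method, per-dataset accuracy table (the toy lexicon scorer and baseline below are hypothetical stand-ins, not any of the twenty-four benchmarked methods):

```python
# Sketch of a sentence-level sentiment benchmark: run several polarity
# "methods" over several labeled datasets and tabulate accuracy.

POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "awful"}

def lexicon_method(text):
    """Toy lexicon-based method: +1 positive, -1 negative, 0 undecided."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return 1 if score > 0 else -1 if score < 0 else 0

def always_positive(text):
    """Trivial baseline: predicts positive for every message."""
    return 1

METHODS = {"toy-lexicon": lexicon_method, "baseline-pos": always_positive}

# Each dataset is a list of (message, gold polarity) pairs; gold is +1/-1.
DATASETS = {
    "reviews": [("great product i love it", 1), ("terrible awful quality", -1)],
    "tweets":  [("what a good day", 1), ("i hate mondays", -1)],
}

def accuracy(method, data):
    correct = sum(method(text) == gold for text, gold in data)
    return correct / len(data)

results = {
    (m_name, d_name): accuracy(m, d)
    for m_name, m in METHODS.items()
    for d_name, d in DATASETS.items()
}
for (m_name, d_name), acc in sorted(results.items()):
    print(f"{m_name:12s} {d_name:8s} accuracy={acc:.2f}")
```

Scaling this loop to twenty-four real methods and eighteen real datasets is, in essence, what the benchmark system exposes through its API.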
Many messages express opinions about events, products, and services, political views, or even their author's emotional state and mood. Sentiment analysis has been used in several applications, including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message, as the current literature does not provide a comparison of existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSN messages. Our study aims at filling this gap by comparing eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.
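The coverage/agreement distinction and a combined method can be illustrated as follows (a minimal sketch with two made-up methods and a simple majority vote, not the eight compared methods or the actual combination rule):

```python
# Coverage: fraction of messages for which a method emits any polarity.
# Agreement: among decided messages, the fraction matching ground truth.
# The combiner votes over whichever methods decided, boosting coverage.

def method_a(text):
    if "!" not in text:
        return None  # no sentiment identified (low coverage)
    return 1 if "good" in text else -1

def method_b(text):
    return 1 if "good" in text or "nice" in text else -1  # always decides

def coverage(method, data):
    decided = [method(t) for t, _ in data]
    return sum(p is not None for p in decided) / len(data)

def agreement(method, data):
    decided = [(method(t), g) for t, g in data if method(t) is not None]
    if not decided:
        return 0.0
    return sum(p == g for p, g in decided) / len(decided)

def combined(text):
    votes = [p for p in (method_a(text), method_b(text)) if p is not None]
    if not votes:
        return None
    return 1 if sum(votes) > 0 else -1

data = [("good stuff!", 1), ("nice work", 1), ("bad day", -1)]
print(coverage(method_a, data), agreement(method_a, data))
print(coverage(combined, data), agreement(combined, data))
```

On this toy data, the combiner inherits method_b's full coverage while keeping high agreement, which mirrors the trade-off the combined method above is designed to exploit.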
Sentiment analysis has become a key tool for several social media applications, including analysis of users' opinions about products and services, support to politicians during campaigns, and even market trend analysis. There are multiple existing sentiment analysis methods that explore different techniques, usually relying on lexical resources or learning approaches. Despite the large interest in this theme and the amount of research effort in the field, almost all existing methods are designed to work with only English content. Most existing strategies for specific languages consist of adapting existing lexical resources, without proper validations or basic baseline comparisons. In this paper, we take a different step in this field. We focus on evaluating existing efforts at language-specific sentiment analysis. To this end, we evaluate twenty-one methods for sentence-level sentiment analysis proposed for English, comparing them with two language-specific methods. Based on nine language-specific datasets, we provide an extensive quantitative analysis of existing multi-language approaches. Our main result suggests that simply translating the input text from a specific language into English and then using one of the existing English methods can be better than the language-specific efforts evaluated. We also rank those implementations by comparing their prediction performance and identifying the methods that achieved the best results using machine translation across different languages. As a final contribution to the research community, we release our codes and datasets. We hope our effort can help sentiment analysis become English independent.
Sentiment analysis methods are used to detect polarity in the thoughts and opinions of users in online social media. As businesses and companies are interested in knowing how social media users perceive their brands, sentiment analysis can help them better evaluate their product and advertisement campaigns. In this paper, we present iFeel, a Web application that allows one to detect sentiments in any form of text, including unstructured social media data. iFeel is free and gives access to seven existing sentiment analysis methods: SentiWordNet, Emoticons, PANAS-t, SASA, Happiness Index, SenticNet, and SentiStrength. With iFeel, users can also combine these methods and create a new Combined-Method that achieves high coverage and F-measure. iFeel provides a single platform to compare the strengths and weaknesses of various sentiment analysis methods through a user-friendly interface with features such as file uploading, graphical visualization, and weight tuning.