Accepted papers: journal track

Data Mining and Knowledge Discovery

  • Session: 6 | Room: 106 | Time: 15:20 – 15:40, Tuesday 16 Sept Unsupervised Interaction-Preserving Discretization of Multivariate Data (Hoang-Vu Nguyen, Emmanuel Müller, Jilles Vreeken, Klemens Böhm)

    Authors

    Hoang-Vu Nguyen
    Emmanuel Müller
    Jilles Vreeken
    Klemens Böhm

    Abstract

    Discretization is the transformation of continuous data into discrete bins. It is an important and general pre-processing technique, and a critical element of many data mining and data management tasks. The general goal is to obtain data that retains as much information in the continuous original as possible. In general, but in particular for exploratory tasks, a key open question is how to discretize multivariate data such that significant associations and patterns are preserved. That is exactly the problem we study in this paper. We propose IPD, an information-theoretic method for unsupervised discretization that focuses on preserving multivariate interactions. To this end, when discretizing a dimension, we consider the distribution of the data over all other dimensions. In particular, our method examines consecutive multivariate regions and combines them if (a) their multivariate data distributions are statistically similar, and (b) this merge reduces the MDL encoding cost. To assess the similarities, we propose ID, a novel interaction distance that does not require assuming a distribution and permits computation in closed form. We give an efficient algorithm for finding the optimal bin merge, as well as a fast well-performing heuristic. Empirical evaluation through pattern-based compression, outlier mining, and classification shows that by preserving interactions we consistently outperform the state of the art in both quality and speed.
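
    Illustrative sketch

    A minimal, hypothetical sketch of MDL-guided merging of adjacent bins in a single dimension. The actual IPD method additionally scores candidate merges with the multivariate interaction distance ID, which is omitted here; all names are illustrative.

        import numpy as np

        def mdl_cost(counts, n):
            # two-part cost: encode the bin boundaries (model), then the data per bin
            model = len(counts) * np.log2(n + 1)
            nz = counts[counts > 0]
            return model - np.sum(nz * np.log2(nz / n))

        def greedy_merge(x, init_bins=32):
            edges = np.linspace(x.min(), x.max(), init_bins + 1)
            counts = np.histogram(x, bins=edges)[0].astype(float)
            n, improved = len(x), True
            while improved and len(counts) > 1:
                improved, base = False, mdl_cost(counts, n)
                for i in range(len(counts) - 1):
                    merged = np.concatenate([counts[:i], [counts[i] + counts[i + 1]], counts[i + 2:]])
                    if mdl_cost(merged, n) < base:  # merge only if the encoding cost drops
                        counts, edges = merged, np.delete(edges, i + 1)
                        improved = True
                        break
            return edges

        print(greedy_merge(np.random.randn(1000)))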

  • Session: 10 | Room: 105 | Time: 16:10 – 16:30, Tuesday 16 Sept Learning about meetings (Been Kim, Cynthia Rudin)

    Authors

    Been Kim
    Cynthia Rudin

    Abstract

    Most people participate in meetings almost every day, multiple times a day. The study of meetings is important, but also challenging, as it requires an understanding of social signals and complex interpersonal dynamics. Our aim in this work is to use a data-driven approach to the science of meetings. We provide tentative evidence that: i) it is possible to automatically detect when during the meeting a key decision is taking place, from analyzing only the local dialogue acts, ii) there are common patterns in the way social dialogue acts are interspersed throughout a meeting, iii) at the time key decisions are made, the amount of time left in the meeting can be predicted from the amount of time that has passed, iv) it is often possible to predict whether a proposal during a meeting will be accepted or rejected based entirely on the language (the set of persuasive words) used by the speaker.

  • Session: 10 | Room: 105 | Time: 16:30 – 16:50, Tuesday 16 Sept Approximating the Crowd (Seyda Ertekin, Cynthia Rudin, Haym Hirsh)

    Authors

    Seyda Ertekin
    Cynthia Rudin
    Haym Hirsh

    Abstract

    The problem of "approximating the crowd" is that of estimating the crowd’s majority opinion by querying only a subset of it. Algorithms that approximate the crowd can intelligently stretch a limited budget for a crowdsourcing task. We present an algorithm, "CrowdSense," that works in an online fashion where items come one at a time. CrowdSense dynamically samples subsets of the crowd based on an exploration/exploitation criterion. The algorithm produces a weighted combination of the subset’s votes that approximates the crowd’s opinion. We then introduce two variations of CrowdSense that make various distributional approximations to handle distinct crowd characteristics. In particular, the first algorithm makes a statistical independence approximation of the labelers for large crowds, whereas the second algorithm finds a lower bound on how often the current sub-crowd agrees with the crowd’s majority vote. Our experiments on CrowdSense and several baselines demonstrate that we can reliably approximate the entire crowd’s vote by collecting opinions from a representative subset of the crowd.
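
    Illustrative sketch

    A minimal sketch of the exploration/exploitation querying loop described above. The confidence criterion and labeler-weight updates of the actual CrowdSense algorithm are simplified here, and all names are illustrative.

        import random

        def approximate_crowd(labelers, item, eps=0.1):
            # labelers: list of (label_fn, weight), sorted by weight descending;
            # label_fn(item) returns a vote in {-1, +1}
            pool, score = list(labelers), 0.0
            while pool:
                # with prob. eps explore a random labeler, else exploit the best one
                idx = random.randrange(len(pool)) if random.random() < eps else 0
                label_fn, w = pool.pop(idx)
                score += w * label_fn(item)
                # stop once the unqueried weight can no longer flip the weighted vote
                if abs(score) > sum(w for _, w in pool):
                    break
            return 1 if score >= 0 else -1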

  • Session: 11 | Room: 106 | Time: 17:30 – 17:50, Tuesday 16 Sept Uncovering the Plot: Detecting Surprising Coalitions of Entities in Multi-Relational Schemas (Hao Wu, Jilles Vreeken, Nikolaj Tatti, Naren Ramakrishnan)

    Authors

    Hao Wu
    Jilles Vreeken
    Nikolaj Tatti
    Naren Ramakrishnan

    Abstract

    Many application domains such as intelligence analysis and cybersecurity require tools for the unsupervised identification of suspicious entities in multi-relational/network data. In particular, there is a need for automated or semi-automated approaches to ‘uncover the plot’, i.e., to detect non-obvious coalitions of entities bridging many types of relations. We cast the problem of detecting such suspicious coalitions and their connections as one of mining surprisingly dense and well-connected chains of biclusters over multi-relational data. With this as our goal, we model data by the Maximum Entropy principle, such that in a statistically well-founded way we can gauge the surprisingness of a discovered bicluster chain with respect to what we already know. We design an algorithm for approximating the most informative multi-relational patterns, and provide strategies to incrementally organize discovered patterns into the background model. We illustrate how our method is adept at discovering the hidden plot in multiple synthetic and real-world intelligence analysis datasets. Our approach naturally generalizes traditional attribute-based maximum entropy models for single relations, and further supports iterative, human-in-the-loop knowledge discovery.
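
    Illustrative sketch

    For reference, the generic maximum entropy background model underlying this style of approach (the paper's specific constraints over bicluster chains are more involved): the background model is the distribution that maximizes entropy subject to matching the statistics already known,

        p^* = \arg\max_{p} \; -\sum_{x} p(x)\log p(x)
        \quad \text{s.t.} \quad \mathbb{E}_{p}[\phi_i(x)] = c_i \;\; \text{for each known statistic } i,

    and a newly discovered pattern is the more surprising the less likely it is under p^*; accepted patterns are folded back into the constraints, which is what makes the process iterative.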

  • Session: 14 | Room: 103-104 | Time: 10:10 – 10:30, Wednesday 17 Sept Overlapping community detection in labeled graphs (Esther Galbrun, Aristides Gionis, Nikolaj Tatti)

    Authors

    Esther Galbrun
    Aristides Gionis
    Nikolaj Tatti

    Abstract

    We present a new approach for the problem of finding overlapping communities in graphs and social networks. Our approach consists of a novel problem definition and three accompanying algorithms. We are particularly interested in graphs that have labels on their vertices, although our methods are also applicable to graphs with no labels. Our goal is to find k communities so that the total edge density over all k communities is maximized. In the case of labeled graphs, we require that each community is succinctly described by a set of labels. This requirement provides a better understanding for the discovered communities. The proposed problem formulation leads to the discovery of vertex-overlapping and dense communities that cover as many graph edges as possible. We capture these properties with a simple objective function, which we solve by adapting efficient approximation algorithms for the generalized maximum-coverage problem and the densest-subgraph problem. Our proposed algorithm is a generic greedy scheme. We experiment with three variants of the scheme, obtained by varying the greedy step of finding a dense subgraph. We validate our algorithms by comparing with other state-of-the-art community-detection methods on a variety of performance measures. Our experiments confirm that our algorithms achieve results of high quality in terms of the reported measures, and are practical in terms of performance.
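
    Illustrative sketch

    The densest-subgraph step of the greedy scheme can, for instance, be instantiated with Charikar's peeling 2-approximation; a minimal sketch (illustrative, not necessarily the paper's exact variant):

        def densest_subgraph(adj):
            # adj: dict {vertex: set(neighbors)}, undirected, no self-loops
            adj = {v: set(ns) for v, ns in adj.items()}
            m = sum(len(ns) for ns in adj.values()) // 2
            best, best_density = set(adj), m / max(len(adj), 1)
            while len(adj) > 1:
                v = min(adj, key=lambda u: len(adj[u]))  # peel the min-degree vertex
                m -= len(adj[v])
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                if m / len(adj) > best_density:          # keep the densest prefix
                    best, best_density = set(adj), m / len(adj)
            return best, best_density

        # K4 with a pendant vertex: peeling recovers the clique {1, 2, 3, 4}
        g = {1: {2, 3, 4, 5}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}, 5: {1}}
        print(densest_subgraph(g))  # ({1, 2, 3, 4}, 1.5)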

  • Session: 15 | Room: 102 | Time: 12:00 – 12:20, Wednesday 17 Sept Invariant Time-Series Factorization (Josif Grabocka, Lars Schmidt-Thieme)

    Authors

    Josif Grabocka
    Lars Schmidt-Thieme

    Abstract

    Time-series analysis is an important domain of machine learning and a plethora of methods have been developed for the task. This paper proposes a new representation of time series which, in contrast to existing approaches, decomposes a time-series dataset into latent patterns and membership weights of local segments to those patterns. The process is formalized as a constrained objective function and a tailored stochastic coordinate descent optimization is applied. The time series are projected to a new feature representation consisting of the sums of the membership weights, which captures frequencies of local patterns. Features from various sliding window sizes are concatenated in order to encapsulate the interaction of patterns from different sizes. The derived representation offers a set of features that boosts classification accuracy. Finally, a large-scale experimental comparison against 11 baselines over 43 real-life datasets indicates that the proposed method achieves state-of-the-art prediction accuracy results.
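
    Illustrative sketch

    An illustrative construction of the frequency features, assuming the latent patterns have already been learned (the paper learns patterns and membership weights jointly by constrained stochastic coordinate descent; the soft assignment below and all names are simplifications):

        import numpy as np

        def pattern_frequencies(ts, patterns, temp=1.0):
            # ts: (M,) series; patterns: (k, w) latent patterns learned beforehand
            k, w = patterns.shape
            feats = np.zeros(k)
            for s in range(len(ts) - w + 1):
                seg = ts[s:s + w]
                d = ((patterns - seg) ** 2).sum(axis=1)
                m = np.exp(-d / temp)
                feats += m / m.sum()  # soft membership of this segment to each pattern
            return feats              # sums of membership weights = local-pattern frequencies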

  • Session: 17 | Room: 105 | Time: 11:40 – 12:00, Wednesday 17 Sept Leveraging the power of local spatial autocorrelation in geophysical interpolative clustering (Annalisa Appice, Donato Malerba)

    Authors

    Annalisa Appice
    Donato Malerba

    Abstract

    Nowadays ubiquitous sensor stations are deployed worldwide, in order to measure several geophysical variables (e.g. temperature, humidity, light) for a growing number of ecological and industrial processes. Although these variables are, in general, measured over large zones and long (potentially unbounded) periods of time, stations cannot cover every spatial location. On the other hand, due to their huge volume, the data produced cannot be entirely recorded for future analysis. In this scenario, summarization, i.e. the computation of aggregates of data, can be used to reduce the amount of produced data stored on disk, while interpolation, i.e. the estimation of unknown data at each location of interest, can be used to supplement station records. We illustrate a novel data mining solution, named interpolative clustering, that has the merit of addressing both these tasks in time-evolving, multivariate geophysical applications. It yields a time-evolving clustering model in order to summarize geophysical data, and computes a weighted linear combination of cluster prototypes in order to predict data. Clustering is done by accounting for the local presence of the spatial autocorrelation property in the geophysical data. Weights of the linear combination are defined so as to reflect the inverse distance of the unseen data to each cluster geometry. The cluster geometry is represented through shape-dependent sampling of geographic coordinates of clustered stations. Experiments performed with several data collections investigate the trade-off between the summarization capability and predictive accuracy of the presented interpolative clustering algorithm.
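
    Illustrative sketch

    A minimal sketch of the prediction step: a weighted linear combination of cluster prototypes with inverse-distance weights. The paper represents cluster geometry by shape-dependent samples of station coordinates; a single centroid per cluster stands in for it here, and all names are illustrative.

        import numpy as np

        def idw_predict(query_xy, centroids, prototypes, p=2.0, eps=1e-9):
            # centroids: (k, 2) cluster geometries; prototypes: (k,) cluster aggregates
            d = np.linalg.norm(centroids - query_xy, axis=1)
            w = 1.0 / (d ** p + eps)      # closer clusters weigh more
            return (w / w.sum()) @ prototypes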

  • Session: 17 | Room: 105 | Time: 12:00 – 12:20, Wednesday 17 Sept Detecting Localized Homogeneous Anomalies over Spatio-Temporal Data (Aditya Telang, Deepak P, Salil Joshi, Prasad Deshpande, Ranjana Rajendran)

    Authors

    Aditya Telang
    Deepak P
    Salil Joshi
    Prasad Deshpande
    Ranjana Rajendran

    Abstract

    The last decade has witnessed an unprecedented growth in the availability of data having spatio-temporal characteristics. Given the scale and richness of such data, finding spatio-temporal patterns that demonstrate significantly different behavior from their neighbors could be of interest for various application scenarios such as weather modeling, analyzing the spread of disease outbreaks, monitoring traffic congestion, and so on. In this paper, we propose an automated approach to exploring and discovering such anomalous patterns irrespective of the underlying domain from which the data is recovered. Our approach differs significantly from traditional methods of spatial outlier detection, and employs two phases: (i) discovering homogeneous regions, and (ii) evaluating these regions as anomalies based on their statistical difference from a generalized neighborhood. We evaluate the quality of our approach and distinguish it from existing techniques via an extensive experimental evaluation.

  • Session: 17 | Room: 105 | Time: 12:20 – 12:40, Wednesday 17 Sept Confidence bands for time series data (Jussi Korpela, Kai Puolamäki, Aristides Gionis)

    Authors

    Jussi Korpela
    Kai Puolamäki
    Aristides Gionis

    Abstract

    Simultaneous confidence intervals, or confidence bands, provide an intuitive description of the variability of a time series. Given a set of N time series of length M, we consider the problem of finding a confidence band that contains a (1 − α)-fraction of the observations. We construct such confidence bands by finding the set of N − K time series whose envelope is minimized. We refer to this problem as the minimum width envelope problem. We show that the minimum width envelope problem is NP-hard, and we develop a greedy heuristic algorithm, which we compare to quantile- and distance-based confidence band methods. We also describe a method to find an effective confidence level α_eff and an effective number of observations to remove K_eff, such that the resulting confidence bands will keep the family-wise error rate below α. We evaluate our methods on synthetic and real datasets. We demonstrate that our method can be used to construct confidence bands with guaranteed family-wise error rate control, also when there is too little data for the quantile-based methods to work.
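
    Illustrative sketch

    A sketch of a greedy heuristic in the spirit described above (not necessarily the paper's exact algorithm): remove, K times, the series whose removal shrinks the envelope the most. It is quadratic in N and meant only to convey the idea.

        import numpy as np

        def greedy_envelope(X, K):
            # X: (N, M) array holding N time series of length M
            keep = list(range(len(X)))
            width = lambda idx: (X[idx].max(axis=0) - X[idx].min(axis=0)).sum()
            for _ in range(K):
                # drop the series whose removal minimizes the remaining envelope width
                j = min(keep, key=lambda r: width([i for i in keep if i != r]))
                keep.remove(j)
            S = X[keep]
            return S.min(axis=0), S.max(axis=0)  # lower and upper band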

  • Session: 19 | Room: 105-106 | Time: 16:20 – 16:40, Wednesday 17 Sept Self-Organizing Maps by Difference of Convex functions optimization (Hoai An Le Thi, Manh Cuong Nguyen)

    Authors

    Hoai An Le Thi
    Manh Cuong Nguyen

    Abstract

    We offer an efficient approach based on Difference of Convex functions (DC) optimization for Self-Organizing Maps (SOM). We consider SOM as an optimization problem with a nonsmooth, nonconvex energy function and investigate DC programming and the DC Algorithm (DCA), an innovative approach in the nonconvex optimization framework, to effectively solve this problem. Furthermore, an appropriate training version of this algorithm is proposed. The numerical results on many real-world datasets show the efficiency of the proposed DCA-based algorithms with respect to both the quality of solutions and the topographic maps.

  • Session: 22 | Room: 101 | Time: 16:20 – 16:40, Wednesday 17 Sept Classy: Fast Clustering Streams of Call-Graphs (Orestis Kostakis)

    Author

    Orestis Kostakis

    Abstract

    An abstraction resilient to common malware obfuscation techniques is the call-graph. A call-graph is the representation of an executable file as a directed graph with labeled vertices, where the vertices correspond to functions and the edges to function calls. Unfortunately, most of the interesting graph comparison problems, including full-graph comparison and computing the largest common subgraph, belong to the NP-hard class. This makes the study and use of graphs in large-scale systems difficult. Existing work has focused only on offline clustering and has not addressed the issue of clustering streams of graphs. In this paper we present Classy, a scalable distributed system that clusters streams of large call-graphs for purposes including automated malware classification and assisting malware analysts. Since algorithms aimed at clustering sets are not suitable for clustering streams of objects, we propose the use of a clustering algorithm that relies on the notion of candidate clusters and reference samples therein. We demonstrate via thorough experimentation that this approach yields results very close to the offline optimal. Graph similarity is determined by computing the graph edit distance (GED) of pairs of graphs using an adapted version of simulated annealing. Furthermore, we present a novel lower bound for the GED. We also study the problem of approximating statistics of clusters of graphs when the distances of only a fraction of all possible pairs have been computed. Finally, we present results and statistics from a real production-side system that has clustered and contains more than 0.8 million graphs.
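
    Illustrative sketch

    For intuition, a simple classical lower bound on the GED of two vertex-labeled graphs under unit edit costs (the paper contributes a tighter, novel bound; this one only illustrates how cheap bounds prune pairs before the expensive simulated-annealing search):

        from collections import Counter

        def ged_lower_bound(labels1, n_edges1, labels2, n_edges2):
            # vertices whose labels cannot be matched need at least one edit each,
            # and the edge-count difference needs at least that many edge edits
            c1, c2 = Counter(labels1), Counter(labels2)
            matched = sum((c1 & c2).values())          # multiset intersection
            vertex_ops = max(len(labels1), len(labels2)) - matched
            return vertex_ops + abs(n_edges1 - n_edges2)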

  • Session: 23 | Room: 102 | Time: 10:50 – 11:10, Thursday 18 Sept A Peek into the Black Box: Exploring Classifiers by Randomization (Andreas Henelius, Kai Puolamäki, Henrik Boström, Lars Asker, Panagiotis Papapetrou)

    Authors

    Andreas Henelius
    Kai Puolamäki
    Henrik Boström
    Lars Asker
    Panagiotis Papapetrou

    Abstract

    Classifiers are often opaque and cannot easily be inspected to gain understanding of which factors are of importance. We propose an efficient iterative algorithm to find the attributes and dependencies used by any classifier when making predictions. The performance and utility of the algorithm are demonstrated on two synthetic and 26 real-world datasets, using 15 commonly used learning algorithms to generate the classifiers. The empirical investigation shows that the novel algorithm is indeed able to find groupings of interacting attributes exploited by the different classifiers. These groupings allow for finding similarities among classifiers for a single dataset as well as for determining the extent to which different classifiers exploit such interactions in general.
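
    Illustrative sketch

    A minimal sketch of the randomization test at the heart of such an approach: permute a candidate group of attributes jointly across rows, which preserves within-group dependencies while breaking ties to the other attributes, and measure how often the classifier's predictions survive. Names are illustrative, and the model is assumed to expose a scikit-learn-style predict; the paper's algorithm searches over groupings iteratively.

        import numpy as np

        def grouping_fidelity(model, X, group_cols, n_rounds=10, seed=0):
            rng = np.random.default_rng(seed)
            base = model.predict(X)
            agree = 0.0
            for _ in range(n_rounds):
                Xp = X.copy()
                Xp[:, group_cols] = X[rng.permutation(len(X))][:, group_cols]
                agree += np.mean(model.predict(Xp) == base)
            # high fidelity: the group interacts (at most) internally,
            # not with attributes outside the group
            return agree / n_rounds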

  • Session: 24 | Room: 105 | Time: 10:10 – 10:30, Thursday 18 Sept Ontology of Core Data Mining Entities (Pance Panov, Larisa Soldatova, Saso Dzeroski)

    Authors

    Pance Panov
    Larisa Soldatova
    Saso Dzeroski

    Abstract

    In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation and an application layer. It provides a representational framework for the description of mining structured data, and in addition provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following the practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend. OntoDM-core is publicly available.

  • Session: 31 | Room: 105 | Time: 17:00 – 17:20, Thursday 18 Sept Generalization-based Privacy Preservation and Discrimination Prevention in Data Publishing and Mining (Sara Hajian, Josep Domingo-Ferrer, Oriol Farras)

    Authors

    Sara Hajian
    Josep Domingo-Ferrer
    Oriol Farras

    Abstract

    Living in the information society facilitates the automatic collection of huge amounts of data on individuals, organizations, etc. Publishing such data for secondary analysis (e.g. learning models and finding patterns) may be extremely useful to policy makers, planners, marketing analysts, researchers and others. Yet, data publishing and mining do not come without dangers, namely privacy invasion and also potential discrimination of the individuals whose data are published. Discrimination may ensue from training data mining models (e.g. classifiers) on data which are biased against certain protected groups (ethnicity, gender, political preferences, etc.). The objective of this paper is to describe how to obtain data sets for publication that are: i) privacy-preserving; ii) unbiased regarding discrimination; and iii) as useful as possible for learning models and finding patterns. We present the first generalization-based approach to simultaneously offer privacy preservation and discrimination prevention. We formally define the problem, give an optimal algorithm to tackle it and evaluate the algorithm in terms of both general and specific data analysis metrics (i.e. various types of classifiers and rule induction algorithms). It turns out that the impact of our transformation on the quality of data is the same or only slightly higher than the impact of achieving just privacy preservation. In addition, we show how to extend our approach to different privacy models and anti-discrimination legal concepts.

  • Session: 31 | Room: 105 | Time: 17:20 – 17:40, Thursday 18 Sept Preserving Worker Privacy in Crowdsourcing (Hiroshi Kajino, Hiromi Arai, Hisashi Kashima)

    Authors

    Hiroshi Kajino
    Hiromi Arai
    Hisashi Kashima

    Abstract

    This paper proposes a crowdsourcing quality control method with worker-privacy preservation. Crowdsourcing allows us to outsource tasks to a number of workers. The results of tasks obtained in crowdsourcing are often of low quality due to differences in the workers' degree of skill. Therefore, we need quality control methods to estimate reliable results from low-quality results. In this paper, we point out privacy problems of workers in crowdsourcing. Personal information of workers can be inferred from the results provided by each worker. To formulate and address the privacy problems, we define a worker-private quality control problem, a variation of the quality control problem that preserves the privacy of workers. We propose a worker-private latent class protocol where a requester can estimate the true results with worker privacy preserved. The key ideas are decentralization of computation and the introduction of secure computation. We theoretically guarantee the security of the proposed protocol and experimentally examine its computational efficiency and accuracy.

  • Session: 33 | Room: 103-104 | Time: 17:00 – 17:20, Thursday 18 Sept Discovering Bands from Graphs (Nikolaj Tatti)

    Author

    Nikolaj Tatti

    Abstract

    Discovering the underlying structure of a given graph is one of the fundamental goals in graph mining. Given a graph, we can often order vertices in a way that neighboring vertices have a higher probability of being connected to each other. This implies that the edges form a band around the diagonal in the adjacency matrix. Such structure may arise, for example, if the graph was created over time: each vertex had an active time interval during which the vertex was connected with other active vertices. The goal of this paper is to model this phenomenon. To this end, we formulate an optimization problem: given a graph and an integer K, we want to order graph vertices and partition the ordered adjacency matrix into K bands such that bands closer to the diagonal are more dense. We measure the goodness of a segmentation using the log-likelihood of a log-linear model, a flexible family of distributions containing many standard distributions. We divide the problem into two subproblems: finding the order and finding the bands. We show that discovering bands can be done in polynomial time with isotonic regression, and we also introduce a heuristic iterative approach. For discovering the order we use the Fiedler order accompanied by a simple combinatorial refinement. We demonstrate empirically that our heuristic works well in practice.
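
    Illustrative sketch

    Requiring band densities to be non-increasing away from the diagonal is exactly an isotonic regression problem; a minimal pool-adjacent-violators (PAVA) sketch for a least-squares non-increasing fit, to show why this subproblem is polynomial:

        def pava_decreasing(y):
            # least-squares fit of a non-increasing sequence to y (unit weights)
            blocks = [[v, 1] for v in y]  # [block mean, block size]
            i = 0
            while i < len(blocks) - 1:
                if blocks[i][0] < blocks[i + 1][0]:  # violates non-increasing order
                    total = blocks[i][0] * blocks[i][1] + blocks[i + 1][0] * blocks[i + 1][1]
                    size = blocks[i][1] + blocks[i + 1][1]
                    blocks[i:i + 2] = [[total / size, size]]  # pool the two blocks
                    i = max(i - 1, 0)
                else:
                    i += 1
            return [m for m, s in blocks for _ in range(s)]

        print(pava_decreasing([1.0, 3.0, 2.0]))  # [2.0, 2.0, 2.0]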

Machine Learning

  • Session: 3 | Room: 105 | Time: 12:00 – 12:20, Tuesday 16 Sept The Bane of Skew: Uncertain Ranks and Unrepresentative Precision (Thomas Lampert, Pierre Gançarski)

    Authors

    Thomas Lampert
    Pierre Gançarski

    Abstract

    While a problem’s skew is often assumed to be constant, this paper discusses three settings where this assumption does not hold. Consequently, incorrectly assuming skew to be constant in these settings results in an over- or under-estimation of an algorithm’s performance. The area under a precision-recall curve (AUCPR) is a common summary measurement used to report the performance of machine learning algorithms. It is well known that precision is dependent upon class skew, which often varies between datasets. In addition to this, it is demonstrated herein that under certain circumstances the relative ranking of an algorithm (as measured by AUCPR) is not constant and is instead also dependent upon skew. The skew at which the performance of two algorithms inverts and the relationship between precision measured at different skews are defined. This is extended to account for temporal skew characteristics and situations in which skew cannot be precisely defined. Formal proofs for these findings are presented, desirable properties are proved and their application demonstrated.
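
    Illustrative sketch

    The skew dependence the abstract refers to can be made explicit with a standard identity: writing \pi for the proportion of positives, precision at a fixed operating point (TPR, FPR) is

        \mathrm{prec}(\pi) = \frac{\pi\,\mathrm{TPR}}{\pi\,\mathrm{TPR} + (1-\pi)\,\mathrm{FPR}},

    so precision measured at skew \pi translates to any other skew \pi' through the ratio r = \mathrm{FPR}/\mathrm{TPR}:

        \mathrm{prec}(\pi') = \frac{\pi'}{\pi' + (1-\pi')\,r},
        \qquad r = \frac{\pi\,(1-\mathrm{prec}(\pi))}{(1-\pi)\,\mathrm{prec}(\pi)}.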

  • Session: 5 | Room: 105 | Time: 14:20 – 14:40, Tuesday 16 Sept Learning A Priori Constrained Weighted Majority Votes (Aurélien Bellet, Amaury Habrard, Emilie Morvant, Marc Sebban)

    Authors

    Aurélien Bellet
    Amaury Habrard
    Emilie Morvant
    Marc Sebban

    Abstract

    Weighted majority votes allow one to combine the output of several classifiers or voters. MinCq is a recent algorithm for optimizing the weight of each voter based on the minimization of a theoretical bound over the risk of the vote with elegant PAC-Bayesian generalization guarantees. However, while it has demonstrated good performance when combining weak classifiers, MinCq cannot make use of the useful a priori knowledge that one may have when using a mixture of weak and strong voters. In this paper, we propose P-MinCq, an extension of MinCq that can incorporate such knowledge in the form of a constraint over the distribution of the weights, along with general proofs of convergence that stand in the sample compression setting for data-dependent voters. The approach is applied to a vote of k-NN classifiers with a specific modeling of the voters’ performance. P-MinCq significantly outperforms the classic k-NN classifier, a symmetric NN and MinCq using the same voters. We show that it is also competitive with LMNN, a popular metric learning algorithm, and that combining both approaches further reduces the error.

  • Session: 15 | Room: 102 | Time: 12:20 – 12:40, Wednesday 17 Sept SAGA: Sparse And Geometry-Aware non-negative matrix factorization through non-linear local embedding (Nicolas Courty, Xing Gong, Jimmy Vandel, Thomas Burger)

    Authors

    Nicolas Courty
    Xing Gong
    Jimmy Vandel
    Thomas Burger

    Abstract

    This paper presents a new non-negative matrix factorization technique which (1) allows the decomposition of the original data on multiple latent factors accounting for the geometrical structure of the manifold embedding the data; (2) provides an optimal representation with a controllable level of sparsity; (3) has overall linear complexity, allowing large and high-dimensional datasets to be handled in tractable time. It operates by coding the data with respect to local neighbors with non-linear weights. This locality is obtained as a consequence of the simultaneous sparsity and convexity constraints. Our method is demonstrated over several experiments, including a feature extraction and classification task, where it achieves better performance than state-of-the-art factorization methods, with a shorter computational time.

  • Session: 21 | Room: 103-104 | Time: 16:40 – 17:00, Wednesday 17 Sept Collaborative Filtering with Information-Rich and Information-Sparse Entities (Kai Zhu, Rui Wu, Lei Ying, R. Srikant)

    Authors

    Kai Zhu
    Rui Wu
    Lei Ying
    R. Srikant

    Abstract

    In this paper, we consider a popular model for collaborative filtering in recommender systems. In particular, we consider both the clustering model, where only users (or items) are clustered, and the co-clustering model, where both users and items are clustered, and further, we assume that some users rate many items (information-rich users) and some users rate only a few items (information-sparse users). When users (or items) are clustered, our algorithm can recover the rating matrix with ω(MK log M) noisy entries while MK entries are necessary, where K is the number of clusters and M is the number of items. In the case of co-clustering, we prove that K^2 entries are necessary for recovering the rating matrix, and our algorithm achieves this lower bound within a logarithmic factor when K is sufficiently large. Extensive simulations on Netflix and MovieLens data show that our algorithm outperforms the alternating minimization (AM) and the popularity-among-friends (PAF) algorithms. The performance difference increases even more when noise is added to the datasets.

  • Session: 21 | Room: 103-104 | Time: 17:00 – 17:20, Wednesday 17 Sept A Constrained Matrix-Variate Gaussian Process for Transposable Data (Oluwasanmi Koyejo, Cheng Lee, Joydeep Ghosh)

    Authors

    Oluwasanmi Koyejo
    Cheng Lee
    Joydeep Ghosh

    Abstract

    Transposable data represent interactions between two sets of entities, and are typically represented as a matrix containing the known interaction values. Additional side information may consist of feature vectors specific to the entities corresponding to the rows and/or columns of such a matrix. Further information may also be available in the form of interactions or hierarchies among entities along the same mode (axis). We propose a novel approach for modeling transposable data with missing interactions given additional side information. The interactions are modeled as noisy observations from a latent noise-free matrix generated from a matrix-variate Gaussian process. The construction of row and column covariances using side information provides a flexible mechanism for specifying a priori knowledge of the row and column correlations in the data. Further, the use of such a prior combined with the side information enables predictions for new rows and columns not observed in the training data. In this work, we combine the matrix-variate Gaussian process model with low-rank constraints. The constrained Gaussian process approach is applied to the prediction of hidden associations between genes and diseases using a small set of observed associations, as well as prior covariances induced by gene-gene interaction networks and disease ontologies. The proposed approach is also applied to recommender systems data, which involves predicting the item ratings of users using known associations as well as prior covariances induced by social networks. We present experimental results that highlight the performance of the constrained matrix-variate Gaussian process as compared to state-of-the-art approaches in each domain.
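
    Illustrative sketch

    For reference, the standard matrix-variate Gaussian process prior behind this construction (the paper adds low-rank constraints on top of it): with row covariance R built from row side information and column covariance C built from column side information, a latent matrix F and noisy observations Y satisfy

        \operatorname{vec}(F) \sim \mathcal{N}\!\left(0,\; C \otimes R\right),
        \qquad Y_{ij} = F_{ij} + \varepsilon_{ij}, \quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2).

    The Kronecker structure is what lets side information propagate predictions to rows and columns never observed in training: a new row only needs its covariance with existing rows under R.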

  • Session: 23 | Room: 102 | Time: 10:10 – 10:30, Thursday 18 Sept Leave-one-out cross-validation is risk consistent for lasso (Darren Homrighausen, Daniel McDonald)

    Authors

    Darren Homrighausen
    Daniel McDonald

    Abstract

    The lasso procedure pervades the statistical and signal processing literature, and as such, is the target of substantial theoretical and applied research. While much of this research focuses on the desirable properties that lasso possesses—predictive risk consistency, sign consistency, correct model selection—these results assume that the tuning parameter is chosen in an oracle fashion. Yet, this is impossible in practice. Instead, data analysts must use the data twice, once to choose the tuning parameter and again to estimate the model. But only heuristics have ever justified such a procedure. To this end, we give the first definitive answer about the risk consistency of lasso when the smoothing parameter is chosen via cross-validation. We show that under some restrictions on the design matrix, the lasso estimator is still risk consistent with an empirically chosen tuning parameter.
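
    Illustrative sketch

    A minimal illustration of the procedure being analyzed: choose the lasso tuning parameter by leave-one-out cross-validation and refit on the full data. This uses scikit-learn, where cv equal to the sample size gives leave-one-out; the data below is synthetic.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p = 60, 10
        X = rng.normal(size=(n, p))
        y = X @ np.r_[2.0, -1.5, np.zeros(p - 2)] + rng.normal(scale=0.5, size=n)

        # the "data used twice" estimator: tuning and fitting on the same sample
        model = LassoCV(cv=n).fit(X, y)
        print(model.alpha_, model.coef_)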

  • Session: 24 | Room: 105 | Time: 10:50 – 11:10, Thursday 18 Sept On Combining Machine Learning with Decision Making (Theja Tulabandhula, Cynthia Rudin)

    Authors

    Theja Tulabandhula
    Cynthia Rudin

    Abstract

    We present a new application and covering number bound for the framework of “Machine Learning with Operational Costs (MLOC),” which is an exploratory form of decision theory. The MLOC framework incorporates knowledge about how a predictive model will be used for a subsequent task, thus combining machine learning with the decision that is made afterwards. In this work, we use the MLOC framework to study a problem that has implications for power grid reliability and maintenance, called the Machine Learning and Traveling Repairman Problem (ML&TRP). The goal of the ML&TRP is to determine a route for a “repair crew,” which repairs nodes on a graph. The repair crew aims to minimize the cost of failures at the nodes, but as in many real situations, the failure probabilities are not known and must be estimated. The MLOC framework allows us to understand how this uncertainty influences the repair route. We also present new covering number generalization bounds for the MLOC framework.

  • Session: 28 | Room: 105 | Time: 12:00 – 12:20, Thursday 18 Sept Regression Conformal Prediction with Random Forests (Ulf Johansson, Henrik Boström, Tuve Löfström, Henrik Linusson)

    Authors

    Ulf Johansson
    Henrik Boström
    Tuve Löfström
    Henrik Linusson

    Abstract

    Regression conformal prediction produces prediction intervals that are valid, i.e., the probability of excluding the correct target value is bounded by a predefined confidence level. The most important criterion when comparing conformal regressors is efficiency; the prediction intervals should be as tight (informative) as possible. In this study, the use of random forests as the underlying model for regression conformal prediction is investigated and compared to existing state-of-the-art techniques, which are based on neural networks and k-nearest neighbors. In addition to their robust predictive performance, random forests allow for determining the size of the prediction intervals by using out-of-bag estimates instead of requiring a separate calibration set. An extensive empirical investigation, using 33 publicly available data sets, was undertaken to compare the use of random forests to existing state-of-the-art conformal predictors. The results show that the suggested approach, on almost all confidence levels and using both standard and normalized nonconformity functions, produced significantly more efficient conformal predictors than the existing alternatives.
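
    Illustrative sketch

    A compact sketch of the idea of using out-of-bag estimates in place of a separate calibration set (simplified: the standard nonconformity function only, no normalization, and no finite-sample quantile correction):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def rf_conformal_interval(X, y, X_new, alpha=0.05):
            rf = RandomForestRegressor(n_estimators=500, oob_score=True).fit(X, y)
            scores = np.abs(y - rf.oob_prediction_)   # out-of-bag nonconformity scores
            q = np.quantile(scores, 1 - alpha)        # interval half-width
            pred = rf.predict(X_new)
            return pred - q, pred + q                 # lower and upper interval bounds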

  • Session: 30 | Room: 102 | Time: 17:20 – 17:40, Thursday 18 Sept A Theoretical and Empirical Analysis of Support Vector Machine Methods for Multiple-Instance Classification (Gary Doran, Soumya Ray)

    Authors

    Gary Doran
    Soumya Ray

    Abstract

    The standard support vector machine (SVM) formulation, widely used for supervised learning, possesses several intuitive and desirable properties. In particular, it is convex and assigns zero loss to solutions if, and only if, they correspond to consistent classifying hyperplanes with some nonzero margin. The traditional SVM formulation has been heuristically extended to multiple-instance (MI) classification in various ways. In this work, we analyze several such algorithms and observe that all MI techniques lack at least one of the desirable properties above. Further, we show that this tradeoff is fundamental, stems from the topological properties of consistent classifying hyperplanes for MI data, and is related to the computational complexity of learning MI hyperplanes. We then study the empirical consequences of this three-way tradeoff in MI classification using a large group of algorithms and datasets. We find that the experimental observations generally support our theoretical results, and properties such as the labeling task (instance versus bag labeling) influence the effects of different tradeoffs.
