Schema (field: type, observed min/max):
paper_url: string (length 35–81)
arxiv_id: string (length 6–35)
nips_id: float64
openreview_id: string (length 9–93)
title: string (length 1–1.02k)
abstract: string (length 0–56.5k)
short_abstract: string (length 0–1.95k)
url_abs: string (length 16–996)
url_pdf: string (length 16–996)
proceeding: string (length 7–1.03k)
authors: list (length 0–3.31k)
tasks: list (length 0–147)
date: timestamp[ns] (range 1951-09-01 to 2222-12-22)
conference_url_abs: string (length 16–199)
conference_url_pdf: string (length 21–200)
conference: string (length 2–47)
reproduces_paper: string (22 classes)
methods: list (length 0–7.5k)
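The flattened schema above can be sanity-checked by materializing one row with those column types; a minimal pandas sketch with a single hypothetical record (none of the values below come from the dump itself):

```python
import pandas as pd

# One hypothetical row shaped like the schema; real rows come from the
# paperswithcode dump itself.
row = {
    "paper_url": "https://paperswithcode.com/paper/example",
    "arxiv_id": "1806.00000",
    "nips_id": None,
    "openreview_id": None,
    "title": "Example Paper",
    "abstract": "An example abstract.",
    "short_abstract": "",
    "url_abs": "http://arxiv.org/abs/1806.00000v1",
    "url_pdf": "http://arxiv.org/pdf/1806.00000v1.pdf",
    "proceeding": None,
    "authors": ["A. Author"],
    "tasks": [],
    "date": "2018-06-18",
    "conference_url_abs": None,
    "conference_url_pdf": None,
    "conference": None,
    "reproduces_paper": None,
    "methods": [],
}

df = pd.DataFrame([row])
df["date"] = pd.to_datetime(df["date"])  # timestamp[ns] per the schema
print(df.shape, df["date"].dtype)
```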
https://paperswithcode.com/paper/dynamic-network-model-from-partial
1805.10616
null
null
Dynamic Network Model from Partial Observations
Can evolving networks be inferred and modeled without directly observing their nodes and edges? In many applications, the edges of a dynamic network might not be observed, but one can observe the dynamics of stochastic cascading processes (e.g., information diffusion, virus propagation) occurring over the unobserved network. While there have been efforts to infer networks based on such data, providing a generative probabilistic model that is able to identify the underlying time-varying network remains an open question. Here we consider the problem of inferring generative dynamic network models based on network cascade diffusion data. We propose a novel framework that provides a non-parametric dynamic network model, based on a mixture of coupled hierarchical Dirichlet processes, learned from data capturing cascade node infection times. Our approach allows us to infer the evolving community structure in networks and to obtain an explicit predictive distribution over the edges of the underlying network, including those that were not involved in the transmission of any cascade or are likely to appear in the future. We show the effectiveness of our approach using extensive experiments on synthetic as well as real-world networks.
null
http://arxiv.org/abs/1805.10616v4
http://arxiv.org/pdf/1805.10616v4.pdf
NeurIPS 2018 12
[ "Elahe Ghalebi", "Baharan Mirzasoleiman", "Radu Grosu", "Jure Leskovec" ]
[ "model", "Open-Ended Question Answering" ]
2018-05-27T00:00:00
http://papers.nips.cc/paper/8192-dynamic-network-model-from-partial-observations
http://papers.nips.cc/paper/8192-dynamic-network-model-from-partial-observations.pdf
dynamic-network-model-from-partial-1
null
[ { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "ooJpiued", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or ...
https://paperswithcode.com/paper/pac-bayes-bounds-for-stable-algorithms-with
1806.06827
null
null
PAC-Bayes bounds for stable algorithms with instance-dependent priors
PAC-Bayes bounds have been proposed to get risk estimates based on a training sample. In this paper the PAC-Bayes approach is combined with stability of the hypothesis learned by a Hilbert space valued algorithm. The PAC-Bayes setting is used with a Gaussian prior centered at the expected output. Thus a novelty of our paper is using priors defined in terms of the data-generating distribution. Our main result estimates the risk of the randomized algorithm in terms of the hypothesis stability coefficients. We also provide a new bound for the SVM classifier, which is compared to other known bounds experimentally. Ours appears to be the first stability-based bound that evaluates to non-trivial values.
null
http://arxiv.org/abs/1806.06827v2
http://arxiv.org/pdf/1806.06827v2.pdf
NeurIPS 2018 12
[ "Omar Rivasplata", "Emilio Parrado-Hernandez", "John Shawe-Taylor", "Shiliang Sun", "Csaba Szepesvari" ]
[]
2018-06-18T00:00:00
http://papers.nips.cc/paper/8134-pac-bayes-bounds-for-stable-algorithms-with-instance-dependent-priors
http://papers.nips.cc/paper/8134-pac-bayes-bounds-for-stable-algorithms-with-instance-dependent-priors.pdf
pac-bayes-bounds-for-stable-algorithms-with-1
null
[ { "code_snippet_url": "", "description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes...
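For context, a classic McAllester-style PAC-Bayes bound, in its background form; the paper's contribution is a bound with a data-distribution-dependent Gaussian prior and hypothesis stability coefficients, which this generic statement does not reproduce:

```latex
% Generic PAC-Bayes bound: for any prior $P$ fixed before seeing the
% i.i.d. sample of size $n$, with probability at least $1-\delta$,
% simultaneously for all posteriors $Q$,
\mathbb{E}_{h \sim Q}\big[R(h)\big] \;\le\;
\mathbb{E}_{h \sim Q}\big[\hat{R}(h)\big]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}}
```

The novelty highlighted in the abstract is that $P$ is centered at the expected output of the algorithm, i.e. defined in terms of the data-generating distribution rather than fixed a priori.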
https://paperswithcode.com/paper/automated-bridge-component-recognition-using
1806.06820
null
null
Automated Bridge Component Recognition using Video Data
This paper investigates the automated recognition of structural bridge components using video data. Although understanding video data for structural inspections is straightforward for human inspectors, the implementation of the same task using machine learning methods has not been fully realized. In particular, single-frame image processing techniques, such as convolutional neural networks (CNNs), are not expected to identify structural components accurately when the image is a close-up view, lacking contextual information regarding where on the structure the image originates. Inspired by the significant progress in video processing techniques, this study investigates automated bridge component recognition using video data, where the information from the past frames is used to augment the understanding of the current frame. A new simulated video dataset is created to train the machine learning algorithms. Then, CNNs with recurrent architectures are designed and applied to implement the automated bridge component recognition task. Results are presented for simulated video data, as well as video collected in the field.
null
http://arxiv.org/abs/1806.06820v2
http://arxiv.org/pdf/1806.06820v2.pdf
null
[ "Yasutaka Narazaki", "Vedhus Hoskere", "Tu A. Hoang", "Billie F. Spencer Jr" ]
[ "BIG-bench Machine Learning" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/gradient-descent-with-identity-initialization-1
1802.06093
null
null
Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
We analyze algorithms for approximating a function $f(x) = \Phi x$ mapping $\Re^d$ to $\Re^d$ using deep linear neural networks, i.e. that learn a function $h$ parameterized by matrices $\Theta_1,...,\Theta_L$ and defined by $h(x) = \Theta_L \Theta_{L-1} ... \Theta_1 x$. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least squares matrix $\Phi$, in the case where the initial hypothesis $\Theta_1 = ... = \Theta_L = I$ has excess loss bounded by a small enough constant. On the other hand, we show that gradient descent fails to converge for $\Phi$ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If $\Phi$ is symmetric positive definite, we show that an algorithm that initializes $\Theta_i = I$ learns an $\epsilon$-approximation of $f$ using a number of updates polynomial in $L$, the condition number of $\Phi$, and $\log(d/\epsilon)$. In contrast, we show that if the least squares matrix $\Phi$ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that $\Phi$ satisfies $u^{\top} \Phi u > 0$ for all $u$, but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant $u^{\top} \Theta_L \Theta_{L-1} ... \Theta_1 u > 0$ for all $u$, and another that "balances" $\Theta_1, ..., \Theta_L$ so that they have the same singular values.
null
http://arxiv.org/abs/1802.06093v4
http://arxiv.org/pdf/1802.06093v4.pdf
ICML 2018
[ "Peter L. Bartlett", "David P. Helmbold", "Philip M. Long" ]
[]
2018-02-16T00:00:00
null
null
null
null
[]
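The positive-definite case described in the abstract can be illustrated numerically; a sketch (not the paper's analysis) running gradient descent from identity initialization on a depth-3 linear network with a hypothetical symmetric positive definite target Phi:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, lr, steps = 3, 3, 0.05, 500

# Hypothetical symmetric positive definite target, spectral norm scaled to 1,
# so the identity start has small excess loss (the regime of the theorem).
A = rng.standard_normal((d, d))
Phi = A @ A.T + d * np.eye(d)
Phi /= np.linalg.norm(Phi, 2)

Thetas = [np.eye(d) for _ in range(L)]  # identity initialization

def prod(mats):
    """Theta_k ... Theta_1 for mats = [Theta_1, ..., Theta_k]."""
    W = np.eye(d)
    for M in mats:
        W = M @ W
    return W

for _ in range(steps):
    E = prod(Thetas) - Phi  # residual of the end-to-end linear map
    # Gradient of ||Theta_L...Theta_1 - Phi||_F^2 w.r.t. each factor.
    grads = [2 * prod(Thetas[i + 1:]).T @ E @ prod(Thetas[:i]).T
             for i in range(L)]
    for i in range(L):
        Thetas[i] = Thetas[i] - lr * grads[i]

final_loss = np.linalg.norm(prod(Thetas) - Phi) ** 2
print(final_loss)
```

Because Phi is symmetric and the iterates start at the identity, all factors stay in a commuting family and the product converges to Phi, matching the abstract's positive-definite case.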
https://paperswithcode.com/paper/temporal-coherence-based-self-supervised
1806.06811
null
null
Temporal coherence-based self-supervised learning for laparoscopic workflow analysis
In order to provide the right type of assistance at the right time, computer-assisted surgery systems need context awareness. To achieve this, methods for surgical workflow analysis are crucial. Currently, convolutional neural networks provide the best performance for video-based workflow analysis tasks. For training such networks, large amounts of annotated data are necessary. However, collecting a sufficient amount of data is often costly, time-consuming, and not always feasible. In this paper, we address this problem by presenting and comparing different approaches for self-supervised pretraining of neural networks on unlabeled laparoscopic videos using temporal coherence. We evaluate our pretrained networks on Cholec80, a publicly available dataset for surgical phase segmentation, on which a maximum F1 score of 84.6 was reached. Furthermore, we were able to achieve an increase of the F1 score of up to 10 points when compared to a non-pretrained neural network.
To achieve this, methods for surgical workflow analysis are crucial.
http://arxiv.org/abs/1806.06811v2
http://arxiv.org/pdf/1806.06811v2.pdf
null
[ "Isabel Funke", "Alexander Jenke", "Sören Torge Mees", "Jürgen Weitz", "Stefanie Speidel", "Sebastian Bodenstedt" ]
[ "Self-Supervised Learning", "Surgical phase recognition" ]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": null, "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Contrastive Learning", "source_title": null...
https://paperswithcode.com/paper/better-runtime-guarantees-via-stochastic
1801.04487
null
null
Better Runtime Guarantees Via Stochastic Domination
Apart from a few exceptions, the mathematical runtime analysis of evolutionary algorithms is mostly concerned with expected runtimes. In this work, we argue that stochastic domination is a notion that should be used more frequently in this area. Stochastic domination allows one to formulate much more informative performance guarantees, it allows one to decouple the algorithm analysis into the true algorithmic part of detecting a domination statement and the probability-theoretical part of deriving the desired probabilistic guarantees from this statement, and it helps in finding simpler and more natural proofs. As particular results, we prove a fitness level theorem which shows that the runtime is dominated by a sum of independent geometric random variables, we prove the first tail bounds for several classic runtime problems, and we give a short and natural proof for Witt's result that the runtime of any $(\mu,p)$ mutation-based algorithm on any function with unique optimum is subdominated by the runtime of a variant of the (1+1) EA on the OneMax function. As side-products, we determine the fastest unbiased (1+1) algorithm for the LeadingOnes benchmark problem, both in the general case and when restricted to static mutation operators, and we prove a Chernoff-type tail bound for sums of independent coupon collector distributions.
null
http://arxiv.org/abs/1801.04487v5
http://arxiv.org/pdf/1801.04487v5.pdf
null
[ "Benjamin Doerr" ]
[ "Evolutionary Algorithms" ]
2018-01-13T00:00:00
null
null
null
null
[]
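The fitness level theorem in the abstract can be illustrated empirically; a sketch assuming the standard (1+1) EA with mutation rate 1/n on OneMax, where the runtime is dominated by a sum of independent geometric random variables whose expectations sum to about e·n·H_n:

```python
import math
import random

random.seed(1)
n = 30

def run_one_plus_one_ea(n):
    """(1+1) EA with standard bit mutation (rate 1/n) on OneMax."""
    x = [random.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        # Flip each bit independently with probability 1/n.
        y = [b ^ (random.random() < 1.0 / n) for b in x]
        if sum(y) >= sum(x):  # accept if not worse
            x = y
        steps += 1
    return steps

runtimes = [run_one_plus_one_ea(n) for _ in range(20)]
avg = sum(runtimes) / len(runtimes)

# Fitness-level bound: from fitness i, a success has probability at least
# (n - i)/(e n), so the runtime is dominated by a sum of independent
# geometric random variables; their expectations sum to about e*n*H_n.
bound = sum(math.e * n / (n - i) for i in range(n))
print(avg, bound)
```

The empirical average stays below the fitness-level bound, as the domination statement predicts for the expectation.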
https://paperswithcode.com/paper/scaling-neural-machine-translation
1806.00187
null
null
Scaling Neural Machine Translation
Sequence to sequence learning models still require several days to reach state of the art performance on large benchmark datasets using a single machine. This paper shows that reduced precision and large batch training can speedup training by nearly 5x on a single 8-GPU machine with careful tuning and implementation. On WMT'14 English-German translation, we match the accuracy of Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We further improve these results to 29.8 BLEU by training on the much larger Paracrawl dataset. On the WMT'14 English-French task, we obtain a state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.
Sequence to sequence learning models still require several days to reach state of the art performance on large benchmark datasets using a single machine.
http://arxiv.org/abs/1806.00187v3
http://arxiv.org/pdf/1806.00187v3.pdf
WS 2018 10
[ "Myle Ott", "Sergey Edunov", "David Grangier", "Michael Auli" ]
[ "GPU", "Machine Translation", "Question Answering", "Translation" ]
2018-06-01T00:00:00
https://aclanthology.org/W18-6301
https://aclanthology.org/W18-6301.pdf
scaling-neural-machine-translation-1
null
[]
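The large-batch training behind these speedups is typically implemented by accumulating gradients over micro-batches before a synchronous update; a minimal sketch (plain averaged-gradient SGD on a toy least-squares problem, not fairseq's implementation) showing the accumulated step equals the large-batch step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression; grad() is the mean-squared-error gradient on a batch.
X = rng.standard_normal((64, 5))
w_true = rng.standard_normal(5)
y = X @ w_true

def grad(w, Xb, yb):
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

w0 = np.zeros(5)
lr = 0.1

# One step on the full batch of 64...
w_big = w0 - lr * grad(w0, X, y)

# ...equals one step with gradients accumulated over 8 micro-batches of 8
# (average of the micro-batch mean gradients).
acc = np.zeros(5)
for Xb, yb in zip(np.split(X, 8), np.split(y, 8)):
    acc += grad(w0, Xb, yb)
w_acc = w0 - lr * acc / 8

print(np.allclose(w_big, w_acc))  # True
```

With equal-sized micro-batches the average of micro-batch mean gradients is exactly the full-batch mean gradient, which is why accumulation simulates a large batch on limited memory.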
https://paperswithcode.com/paper/almost-exact-matching-with-replacement-for
1806.06802
null
null
Interpretable Almost Matching Exactly for Causal Inference
We aim to create the highest possible quality of treatment-control matches for categorical data in the potential outcomes framework. Matching methods are heavily used in the social sciences due to their interpretability, but most matching methods do not pass basic sanity checks: they fail when irrelevant variables are introduced, and tend to be either computationally slow or produce low-quality matches. The method proposed in this work aims to match units on a weighted Hamming distance, taking into account the relative importance of the covariates; the algorithm aims to match units on as many relevant variables as possible. To do this, the algorithm creates a hierarchy of covariate combinations on which to match (similar to downward closure), in the process solving an optimization problem for each unit in order to construct the optimal matches. The algorithm uses a single dynamic program to solve all of the optimization problems simultaneously. Notable advantages of our method over existing matching procedures are its high-quality matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible.
Notable advantages of our method over existing matching procedures are its high-quality matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible.
https://arxiv.org/abs/1806.06802v6
https://arxiv.org/pdf/1806.06802v6.pdf
null
[ "Yameng Liu", "Awa Dieng", "Sudeepa Roy", "Cynthia Rudin", "Alexander Volfovsky" ]
[ "Causal Inference" ]
2018-06-18T00:00:00
null
null
null
null
[]
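The weighted Hamming distance used for matching can be sketched directly; a toy example with hypothetical covariate weights (the paper's dynamic program for choosing which covariates to match on is not shown):

```python
import numpy as np

# Categorical covariates as integer codes; higher weight = more important.
weights = np.array([3.0, 2.0, 1.0])   # hypothetical covariate weights
treated = np.array([1, 0, 2])          # one treated unit
controls = np.array([
    [1, 0, 1],   # differs only on the least important covariate
    [0, 0, 2],   # differs on the most important covariate
    [1, 1, 0],   # differs on two covariates
])

# Weighted Hamming distance: sum of weights where covariates disagree.
dists = ((controls != treated) * weights).sum(axis=1)
best = int(np.argmin(dists))
print(dists, best)  # control 0 is the best match
```

Matching on a weighted Hamming distance lets an irrelevant covariate (small weight) disagree without displacing a match that agrees on the important ones.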
https://paperswithcode.com/paper/deep-spatiotemporal-representation-of-the
1806.06793
null
null
Deep Spatiotemporal Representation of the Face for Automatic Pain Intensity Estimation
Automatic pain intensity assessment has a high value in disease diagnosis applications. Inspired by the fact that many diseases and brain disorders can interrupt normal facial expression formation, we aim to develop a computational model for automatic pain intensity assessment from spontaneous and micro facial variations. For this purpose, we propose a 3D deep architecture for dynamic facial video representation. The proposed model is built by stacking several convolutional modules where each module encompasses a 3D convolution kernel with a fixed temporal depth, several parallel 3D convolutional kernels with different temporal depths, and an average pooling layer. Deploying variable temporal depths in the proposed architecture allows the model to effectively capture a wide range of spatiotemporal variations on the faces. Extensive experiments on the UNBC-McMaster Shoulder Pain Expression Archive database show that our proposed model yields promising performance compared to the state-of-the-art in automatic pain intensity estimation.
null
http://arxiv.org/abs/1806.06793v1
http://arxiv.org/pdf/1806.06793v1.pdf
null
[ "Mohammad Tavakolian", "Abdenour Hadid" ]
[]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/73642d9425a358b51a683cf6f95852d06cba1096/torch/nn/modules/conv.py#L421", "description": "A **3D Convolution** is a type of [convolution](https://paperswithcode.com/method/convolution) where the kernel slides in 3 dimensions as opposed to 2 dimen...
https://paperswithcode.com/paper/flexible-collaborative-estimation-of-the
1806.06784
null
null
Robust inference on the average treatment effect using the outcome highly adaptive lasso
Many estimators of the average effect of a treatment on an outcome require estimation of the propensity score, the outcome regression, or both. It is often beneficial to utilize flexible techniques such as semiparametric regression or machine learning to estimate these quantities. However, optimal estimation of these regressions does not necessarily lead to optimal estimation of the average treatment effect, particularly in settings with strong instrumental variables. A recent proposal addressed these issues via the outcome-adaptive lasso, a penalized regression technique for estimating the propensity score that seeks to minimize the impact of instrumental variables on treatment effect estimators. However, a notable limitation of this approach is that its application is restricted to parametric models. We propose a more flexible alternative that we call the outcome highly adaptive lasso. We discuss large sample theory for this estimator and propose closed form confidence intervals based on the proposed estimator. We show via simulation that our method offers benefits over several popular approaches.
null
https://arxiv.org/abs/1806.06784v3
https://arxiv.org/pdf/1806.06784v3.pdf
null
[ "Cheng Ju", "David Benkeser", "Mark J. van der Laan" ]
[ "regression" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/consistent-individualized-feature-attribution
1802.03888
null
null
Consistent Individualized Feature Attribution for Tree Ensembles
A unified approach to explain the output of any machine learning model.
A unified approach to explain the output of any machine learning model.
http://arxiv.org/abs/1802.03888v3
http://arxiv.org/pdf/1802.03888v3.pdf
null
[ "Scott M. Lundberg", "Gabriel G. Erion", "Su-In Lee" ]
[ "BIG-bench Machine Learning" ]
2018-02-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/bingan-learning-compact-binary-descriptors
1806.06778
null
null
BinGAN: Learning Compact Binary Descriptors with a Regularized GAN
In this paper, we propose a novel regularization method for Generative Adversarial Networks, which allows the model to learn discriminative yet compact binary representations of image patches (image descriptors). We employ the dimensionality reduction that takes place in the intermediate layers of the discriminator network and train the binarized low-dimensional representation of the penultimate layer to mimic the distribution of the higher-dimensional preceding layers. To achieve this, we introduce two loss terms that aim at: (i) reducing the correlation between the dimensions of the binarized low-dimensional representation of the penultimate layer (i.e., maximizing joint entropy) and (ii) propagating the relations between the dimensions in the high-dimensional space to the low-dimensional space. We evaluate the resulting binary image descriptors on two challenging applications, image matching and retrieval, and achieve state-of-the-art results.
In this paper, we propose a novel regularization method for Generative Adversarial Networks, which allows the model to learn discriminative yet compact binary representations of image patches (image descriptors).
http://arxiv.org/abs/1806.06778v5
http://arxiv.org/pdf/1806.06778v5.pdf
NeurIPS 2018 12
[ "Maciej Zieba", "Piotr Semberecki", "Tarek El-Gaaly", "Tomasz Trzcinski" ]
[ "Dimensionality Reduction", "Retrieval" ]
2018-06-18T00:00:00
http://papers.nips.cc/paper/7619-bingan-learning-compact-binary-descriptors-with-a-regularized-gan
http://papers.nips.cc/paper/7619-bingan-learning-compact-binary-descriptors-with-a-regularized-gan.pdf
bingan-learning-compact-binary-descriptors-1
null
[ { "code_snippet_url": null, "description": "", ...
https://paperswithcode.com/paper/multifit-a-multivariate-multiscale-framework
1806.06777
null
null
Multiscale Fisher's Independence Test for Multivariate Dependence
Identifying dependency in multivariate data is a common inference task that arises in numerous applications. However, existing nonparametric independence tests typically require computation that scales at least quadratically with the sample size, making it difficult to apply them to massive data. Moreover, resampling is usually necessary to evaluate the statistical significance of the resulting test statistics at finite sample sizes, further worsening the computational burden. We introduce a scalable, resampling-free approach to testing the independence between two random vectors by breaking down the task into simple univariate tests of independence on a collection of 2x2 contingency tables constructed through sequential coarse-to-fine discretization of the sample space, transforming the inference task into a multiple testing problem that can be completed with almost linear complexity with respect to the sample size. To address increasing dimensionality, we introduce a coarse-to-fine sequential adaptive procedure that exploits the spatial features of dependency structures to more effectively examine the sample space. We derive a finite-sample theory that guarantees the inferential validity of our adaptive procedure at any given sample size. In particular, we show that our approach can achieve strong control of the family-wise error rate without resampling or large-sample approximation. We demonstrate the substantial computational advantage of the procedure in comparison to existing approaches as well as its decent statistical power under various dependency scenarios through an extensive simulation study, and illustrate how the divide-and-conquer nature of the procedure can be exploited to not just test independence but to learn the nature of the underlying dependency. Finally, we demonstrate the use of our method through analyzing a large data set from a flow cytometry experiment.
Identifying dependency in multivariate data is a common inference task that arises in numerous applications.
https://arxiv.org/abs/1806.06777v7
https://arxiv.org/pdf/1806.06777v7.pdf
null
[ "Shai Gorsky", "Li Ma" ]
[]
2018-06-18T00:00:00
null
null
null
null
[]
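The 2x2 contingency tables at the heart of the method can be sketched at the coarsest level (a single median split per variable; the actual procedure recurses coarse-to-fine over finer cells and controls the family-wise error rate, which this toy omits):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# A dependent pair: y is a noisy function of x.
x = rng.standard_normal(n)
y = x + 0.5 * rng.standard_normal(n)

def chi2_2x2(a, b):
    """Chi-square statistic for the 2x2 table from median splits of a and b."""
    ia = a > np.median(a)
    ib = b > np.median(b)
    table = np.array([[np.sum(~ia & ~ib), np.sum(~ia & ib)],
                      [np.sum(ia & ~ib),  np.sum(ia & ib)]], dtype=float)
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

stat_dep = chi2_2x2(x, y)                       # dependent pair
stat_ind = chi2_2x2(x, rng.standard_normal(n))  # independent pair
print(stat_dep, stat_ind)
```

Each such univariate 2x2 test is O(n), which is what makes the overall coarse-to-fine procedure scale almost linearly in the sample size.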
https://paperswithcode.com/paper/kernel-based-outlier-detection-using-the
1806.06775
null
null
Kernel-based Outlier Detection using the Inverse Christoffel Function
Outlier detection methods have become increasingly relevant in recent years due to increased security concerns and because of their vast application to different fields. Recently, Pauwels and Lasserre (2016) noticed that the sublevel sets of the inverse Christoffel function accurately depict the shape of a cloud of data using a sum-of-squares polynomial and can be used to perform outlier detection. In this work, we propose a kernelized variant of the inverse Christoffel function that makes it computationally tractable for data sets with a large number of features. We compare our approach to current methods on 15 different data sets and achieve the best average area under the precision recall curve (AUPRC) score, the best average rank and the lowest root mean square deviation.
null
http://arxiv.org/abs/1806.06775v1
http://arxiv.org/pdf/1806.06775v1.pdf
null
[ "Armin Askari", "Forest Yang", "Laurent El Ghaoui" ]
[ "Outlier Detection" ]
2018-06-18T00:00:00
null
null
null
null
[]
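The non-kernelized empirical inverse Christoffel function from Pauwels and Lasserre, which the paper builds on, can be sketched with explicit degree-2 monomial features (the paper's kernelized variant is not shown; the ridge term is a numerical-stability assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))  # inlier cloud

def features(x):
    """Monomials up to degree 2 in two variables: 1, x1, x2, x1^2, x1*x2, x2^2."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

V = np.array([features(x) for x in X])
M = V.T @ V / len(X)                        # empirical moment matrix
Minv = np.linalg.inv(M + 1e-6 * np.eye(6))  # small ridge for stability

def score(x):
    """Inverse Christoffel function v(x)^T M^{-1} v(x): large = outlier."""
    v = features(x)
    return v @ Minv @ v

inlier, outlier = np.array([0.1, -0.2]), np.array([6.0, 6.0])
print(score(inlier), score(outlier))  # the outlier scores far higher
```

The sublevel sets of this score hug the data cloud, so thresholding it yields the outlier detector the abstract describes; kernelizing replaces the explicit monomial map when the feature count is large.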
https://paperswithcode.com/paper/kid-net-convolution-networks-for-kidney
1806.06769
null
null
Kid-Net: Convolution Networks for Kidney Vessels Segmentation from CT-Volumes
Semantic image segmentation plays an important role in modeling patient-specific anatomy. We propose a convolutional neural network, called Kid-Net, along with a training schema to segment kidney vessels: artery, vein and collecting system. Such segmentation is vital during the surgical planning phase in which medical decisions are made before surgical incision. Our main contribution is developing a training schema that handles unbalanced data, reduces false positives and enables high-resolution segmentation with a limited memory budget. These objectives are attained using dynamic weighting, random sampling and 3D patch segmentation. Manual medical image annotation is both time-consuming and expensive. Kid-Net reduces kidney vessels segmentation time from matter of hours to minutes. It is trained end-to-end using 3D patches from volumetric CT-images. A complete segmentation for a 512x512x512 CT-volume is obtained within a few minutes (1-2 mins) by stitching the output 3D patches together. Feature down-sampling and up-sampling are utilized to achieve higher classification and localization accuracies. Quantitative and qualitative evaluation results on a challenging testing dataset show the competence of Kid-Net.
null
http://arxiv.org/abs/1806.06769v1
http://arxiv.org/pdf/1806.06769v1.pdf
null
[ "Ahmed Taha", "Pechin Lo", "Junning Li", "Tao Zhao" ]
[ "Anatomy", "Image Segmentation", "Segmentation", "Semantic Segmentation" ]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a...
https://paperswithcode.com/paper/modularity-matters-learning-invariant
1806.06765
null
null
Modularity Matters: Learning Invariant Relational Reasoning Tasks
We focus on two supervised visual reasoning tasks whose labels encode a semantic relational rule between two or more objects in an image: the MNIST Parity task and the colorized Pentomino task. The objects in the images undergo random translation, scaling, rotation and coloring transformations. Thus these tasks involve invariant relational reasoning. We report uneven performance of various deep CNN models on these two tasks. For the MNIST Parity task, we report that the VGG19 model soundly outperforms a family of ResNet models. Moreover, the family of ResNet models exhibits a general sensitivity to random initialization for the MNIST Parity task. For the colorized Pentomino task, now both the VGG19 and ResNet models exhibit sluggish optimization and very poor test generalization, hovering around 30% test error. The CNNs we tested all learn hierarchies of fully distributed features and thus encode the distributed representation prior. We are motivated by a hypothesis from cognitive neuroscience which posits that the human visual cortex is modularized, and this allows the visual cortex to learn higher order invariances. To this end, we consider a modularized variant of the ResNet model, referred to as a Residual Mixture Network (ResMixNet) which employs a mixture-of-experts architecture to interleave distributed representations with more specialized, modular representations. We show that very shallow ResMixNets are capable of learning each of the two tasks well, attaining less than 2% and 1% test error on the MNIST Parity and the colorized Pentomino tasks respectively. Most importantly, the ResMixNet models are extremely parameter efficient: generalizing better than various non-modular CNNs that have over 10x the number of parameters. These experimental results support the hypothesis that modularity is a robust prior for learning invariant relational reasoning.
null
http://arxiv.org/abs/1806.06765v1
http://arxiv.org/pdf/1806.06765v1.pdf
null
[ "Jason Jo", "Vikas Verma", "Yoshua Bengio" ]
[ "Mixture-of-Experts", "Relational Reasoning", "Visual Reasoning" ]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - me...
https://paperswithcode.com/paper/closing-the-generalization-gap-of-adaptive
1806.06763
null
null
Closing the Generalization Gap of Adaptive Gradient Methods in Training Deep Neural Networks
Adaptive gradient methods, which adopt historical gradient information to automatically adjust the learning rate, despite the nice property of fast convergence, have been observed to generalize worse than stochastic gradient descent (SGD) with momentum in training deep neural networks. This leaves closing the generalization gap of adaptive gradient methods an open problem. In this work, we show that adaptive gradient methods such as Adam and Amsgrad are sometimes "over adapted". We design a new algorithm, called the partially adaptive momentum estimation method, which unifies Adam/Amsgrad with SGD by introducing a partial adaptive parameter $p$, to achieve the best of both worlds. We also prove the convergence rate of our proposed algorithm to a stationary point in the stochastic nonconvex optimization setting. Experiments on standard benchmarks show that our proposed algorithm can maintain a fast convergence rate as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks. These results suggest that practitioners pick up adaptive gradient methods once again for faster training of deep neural networks.
Experiments on standard benchmarks show that our proposed algorithm can maintain a fast convergence rate as Adam/Amsgrad while generalizing as well as SGD in training deep neural networks.
https://arxiv.org/abs/1806.06763v3
https://arxiv.org/pdf/1806.06763v3.pdf
null
[ "Jinghui Chen", "Dongruo Zhou", "Yiqi Tang", "Ziyan Yang", "Yuan Cao", "Quanquan Gu" ]
[]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/paultsw/nice_pytorch/blob/15cfc543fc3dc81ee70398b8dfc37b67269ede95/nice/layers.py#L109", "description": "**Affine Coupling** is a method for implementing a normalizing flow (where we stack a sequence of invertible bijective transformation functions). Affine coupling...
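The partially adaptive update can be sketched as an Adam-style step that divides by v_hat**p instead of sqrt(v_hat); a simplified sketch (the Amsgrad-style running maximum of v_hat used in the paper is omitted, and the toy objective is hypothetical):

```python
import numpy as np

def padam_step(w, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, p=0.125, eps=1e-8):
    """One partially adaptive momentum step: Adam-style moment estimates, but
    the update divides by v_hat**p with p in (0, 1/2]; p = 1/2 gives
    Adam-like scaling, p -> 0 approaches SGD with momentum."""
    m = b1 * m + (1 - b1) * g            # first moment (momentum)
    v = b2 * v + (1 - b2) * g * g        # second moment
    m_hat = m / (1 - b1 ** t)            # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (v_hat ** p + eps)
    return w, m, v

# Minimize f(w) = ||w||^2 / 2 from w0 = [3, -2].
w = np.array([3.0, -2.0])
m = np.zeros(2)
v = np.zeros(2)
for t in range(1, 301):
    g = w  # gradient of ||w||^2 / 2
    w, m, v = padam_step(w, g, m, v, t)
print(np.linalg.norm(w))
```

The single exponent p interpolates between the fast early progress of Adam/Amsgrad and the SGD-like scaling that the abstract credits for better generalization.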
https://paperswithcode.com/paper/a-memory-network-approach-for-story-based
1805.02838
null
null
A Memory Network Approach for Story-based Temporal Summarization of 360° Videos
We address the problem of story-based temporal summarization of long 360° videos. We propose a novel memory network model named Past-Future Memory Network (PFMN), in which we first compute the scores of 81 normal field of view (NFOV) region proposals cropped from the input 360° video, and then recover a latent, collective summary using the network with two external memories that store the embeddings of previously selected subshots and future candidate subshots. Our major contributions are two-fold. First, our work is the first to address story-based temporal summarization of 360° videos. Second, our model is the first attempt to leverage memory networks for video summarization tasks. For evaluation, we perform three sets of experiments. First, we investigate the view selection capability of our model on the Pano2Vid dataset. Second, we evaluate the temporal summarization with a newly collected 360° video dataset. Finally, we experiment our model's performance in another domain, with image-based storytelling VIST dataset. We verify that our model achieves state-of-the-art performance on all the tasks.
null
http://arxiv.org/abs/1805.02838v3
http://arxiv.org/pdf/1805.02838v3.pdf
CVPR 2018
[ "Sang-ho Lee", "Jinyoung Sung", "Youngjae Yu", "Gunhee Kim" ]
[ "Video Summarization" ]
2018-05-08T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/aykutaaykut/Memory-Networks", "description": "A **Memory Network** provides a memory component that can be read from and written to with the inference capabilities of a neural network model. The motivation is that many neural networks lack a long-term memory compone...
https://paperswithcode.com/paper/pots-protective-optimization-technologies
1806.02711
null
null
POTs: Protective Optimization Technologies
Algorithmic fairness aims to address the economic, moral, social, and political impact that digital systems have on populations through solutions that can be applied by service providers. Fairness frameworks do so, in part, by mapping these problems to a narrow definition and assuming the service providers can be trusted to deploy countermeasures. Not surprisingly, these decisions limit fairness frameworks' ability to capture a variety of harms caused by systems. We characterize fairness limitations using concepts from requirements engineering and from social sciences. We show that the focus on algorithms' inputs and outputs misses harms that arise from systems interacting with the world; that the focus on bias and discrimination omits broader harms on populations and their environments; and that relying on service providers excludes scenarios where they are not cooperative or intentionally adversarial. We propose Protective Optimization Technologies (POTs). POTs provide means for affected parties to address the negative impacts of systems in the environment, expanding avenues for political contestation. POTs intervene from outside the system, do not require service providers to cooperate, and can serve to correct, shift, or expose harms that systems impose on populations and their environments. We illustrate the potential and limitations of POTs in two case studies: countering road congestion caused by traffic-beating applications, and recalibrating credit scoring for loan applicants.
Fairness frameworks do so, in part, by mapping these problems to a narrow definition and assuming the service providers can be trusted to deploy countermeasures.
https://arxiv.org/abs/1806.02711v6
https://arxiv.org/pdf/1806.02711v6.pdf
null
[ "Bogdan Kulynych", "Rebekah Overdorf", "Carmela Troncoso", "Seda Gürses" ]
[ "Decision Making", "Fairness" ]
2018-06-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/surface-networks
1705.10819
null
null
Surface Networks
We study data-driven representations for three-dimensional triangle meshes, which are one of the prevalent objects used to represent 3D geometry. Recent works have developed models that exploit the intrinsic geometry of manifolds and graphs, namely Graph Neural Networks (GNNs) and their spectral variants, which learn from the local metric tensor via the Laplacian operator. Despite offering excellent sample complexity and built-in invariances, intrinsic geometry alone is invariant to isometric deformations, making it unsuitable for many applications. To overcome this limitation, we propose several upgrades to GNNs to leverage extrinsic differential geometry properties of three-dimensional surfaces, increasing their modeling power. In particular, we propose to exploit the Dirac operator, whose spectrum detects principal curvature directions --- this is in stark contrast with the classical Laplace operator, which directly measures mean curvature. We coin the resulting models \emph{Surface Networks (SN)}. We prove that these models define shape representations that are stable to deformation and to discretization, and we demonstrate the efficiency and versatility of SNs on two challenging tasks: temporal prediction of mesh deformations under non-linear dynamics and generative models using a variational autoencoder framework with encoders/decoders given by SNs.
We study data-driven representations for three-dimensional triangle meshes, which are one of the prevalent objects used to represent 3D geometry.
http://arxiv.org/abs/1705.10819v2
http://arxiv.org/pdf/1705.10819v2.pdf
CVPR 2018 6
[ "Ilya Kostrikov", "Zhongshi Jiang", "Daniele Panozzo", "Denis Zorin", "Joan Bruna" ]
[ "3D geometry" ]
2017-05-30T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Kostrikov_Surface_Networks_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Kostrikov_Surface_Networks_CVPR_2018_paper.pdf
surface-networks-1
null
[]
https://paperswithcode.com/paper/extracting-automata-from-recurrent-neural
1711.09576
null
null
Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples
We present a novel algorithm that uses exact learning and abstraction to extract a deterministic finite automaton describing the state dynamics of a given trained RNN. We do this using Angluin's L* algorithm as a learner and the trained RNN as an oracle. Our technique efficiently extracts accurate automata from trained RNNs, even when the state vectors are large and require fine differentiation.
We do this using Angluin's L* algorithm as a learner and the trained RNN as an oracle.
https://arxiv.org/abs/1711.09576v4
https://arxiv.org/pdf/1711.09576v4.pdf
ICML 2018 7
[ "Gail Weiss", "Yoav Goldberg", "Eran Yahav" ]
[]
2017-11-27T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2276
http://proceedings.mlr.press/v80/weiss18a/weiss18a.pdf
extracting-automata-from-recurrent-neural-1
null
[]
https://paperswithcode.com/paper/unsupervised-word-segmentation-from-speech
1806.06734
null
null
Unsupervised Word Segmentation from Speech with Attention
We present a first attempt to perform attentional word segmentation directly from the speech signal, with the final goal to automatically identify lexical units in a low-resource, unwritten language (UL). Our methodology assumes a pairing between recordings in the UL with translations in a well-resourced language. It uses Acoustic Unit Discovery (AUD) to convert speech into a sequence of pseudo-phones that is segmented using neural soft-alignments produced by a neural machine translation model. Evaluation uses an actual Bantu UL, Mboshi; comparisons to monolingual and bilingual baselines illustrate the potential of attentional word segmentation for language documentation.
null
http://arxiv.org/abs/1806.06734v1
http://arxiv.org/pdf/1806.06734v1.pdf
null
[ "Pierre Godard", "Marcely Zanon-Boito", "Lucas Ondel", "Alexandre Berard", "François Yvon", "Aline Villavicencio", "Laurent Besacier" ]
[ "Acoustic Unit Discovery", "Machine Translation", "Segmentation", "Translation" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/semantically-selective-augmentation-for-deep
1806.04074
null
null
Semantically Selective Augmentation for Deep Compact Person Re-Identification
We present a deep person re-identification approach that combines semantically selective, deep data augmentation with clustering-based network compression to generate high performance, light and fast inference networks. In particular, we propose to augment limited training data via sampling from a deep convolutional generative adversarial network (DCGAN), whose discriminator is constrained by a semantic classifier to explicitly control the domain specificity of the generation process. Thereby, we encode information in the classifier network which can be utilized to steer adversarial synthesis, and which fuels our CondenseNet ID-network training. We provide a quantitative and qualitative analysis of the approach and its variants on a number of datasets, obtaining results that outperform the state-of-the-art on the LIMA dataset for long-term monitoring in indoor living spaces.
null
http://arxiv.org/abs/1806.04074v3
http://arxiv.org/pdf/1806.04074v3.pdf
null
[ "Víctor Ponce-López", "Tilo Burghardt", "Sion Hannunna", "Dima Damen", "Alessandro Masullo", "Majid Mirmehdi" ]
[ "Clustering", "Data Augmentation", "Generative Adversarial Network", "Person Re-Identification", "Specificity" ]
2018-06-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/assessing-robustness-of-radiomic-features-by
1806.06719
null
null
Assessing robustness of radiomic features by image perturbation
Image features need to be robust against differences in positioning, acquisition and segmentation to ensure reproducibility. Radiomic models that only include robust features can be used to analyse new images, whereas models with non-robust features may fail to predict the outcome of interest accurately. Test-retest imaging is recommended to assess robustness, but may not be available for the phenotype of interest. We therefore investigated 18 methods to determine feature robustness based on image perturbations. Test-retest and perturbation robustness were compared for 4032 features that were computed from the gross tumour volume in two cohorts with computed tomography imaging: I) 31 non-small-cell lung cancer (NSCLC) patients; II) 19 head-and-neck squamous cell carcinoma (HNSCC) patients. Robustness was measured using the intraclass correlation coefficient (1,1) (ICC). Features with ICC$\geq0.90$ were considered robust. The NSCLC cohort contained more robust features for test-retest imaging than the HNSCC cohort ($73.5\%$ vs. $34.0\%$). A perturbation chain consisting of noise addition, affine translation, volume growth/shrinkage and supervoxel-based contour randomisation identified the fewest false positive robust features (NSCLC: $3.3\%$; HNSCC: $10.0\%$). Thus, this perturbation chain may be used to assess feature robustness.
null
http://arxiv.org/abs/1806.06719v1
http://arxiv.org/pdf/1806.06719v1.pdf
null
[ "Alex Zwanenburg", "Stefan Leger", "Linda Agolli", "Karoline Pilz", "Esther G. C. Troost", "Christian Richter", "Steffen Löck" ]
[ "Translation" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/reconvnet-video-object-segmentation-with
1806.05510
null
null
ReConvNet: Video Object Segmentation with Spatio-Temporal Features Modulation
We introduce ReConvNet, a recurrent convolutional architecture for semi-supervised video object segmentation that is able to fast adapt its features to focus on any specific object of interest at inference time. Generalization to new objects never observed during training is known to be a hard task for supervised approaches that would need to be retrained. To tackle this problem, we propose a more efficient solution that learns spatio-temporal features self-adapting to the object of interest via conditional affine transformations. This approach is simple, can be trained end-to-end and does not necessarily require extra training steps at inference time. Our method shows competitive results on DAVIS2016 with respect to state-of-the-art approaches that use online fine-tuning, and outperforms them on DAVIS2017. ReConvNet shows also promising results on the DAVIS-Challenge 2018, winning the $10$-th position.
null
http://arxiv.org/abs/1806.05510v2
http://arxiv.org/pdf/1806.05510v2.pdf
null
[ "Francesco Lattari", "Marco Ciccone", "Matteo Matteucci", "Jonathan Masci", "Francesco Visin" ]
[ "Object", "Position", "Semantic Segmentation", "Semi-Supervised Video Object Segmentation", "Video Object Segmentation", "Video Semantic Segmentation" ]
2018-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/tree-edit-distance-learning-via-adaptive-1
1806.05009
null
null
Tree Edit Distance Learning via Adaptive Symbol Embeddings
Metric learning aims to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which we call embedding edit distance learning (BEDL) and which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. In our experiments, we show that BEDL improves upon the state-of-the-art in metric learning for trees on six benchmark data sets, ranging from computer science over biomedical data to a natural-language processing data set containing over 300,000 nodes.
null
http://arxiv.org/abs/1806.05009v3
http://arxiv.org/pdf/1806.05009v3.pdf
ICML 2018 7
[ "Benjamin Paaßen", "Claudio Gallicchio", "Alessio Micheli", "Barbara Hammer" ]
[ "Metric Learning" ]
2018-06-13T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2180
http://proceedings.mlr.press/v80/paassen18a/paassen18a.pdf
tree-edit-distance-learning-via-adaptive-2
null
[]
https://paperswithcode.com/paper/towards-multi-instrument-drum-transcription
1806.06676
null
null
Towards multi-instrument drum transcription
Automatic drum transcription, a subtask of the more general automatic music transcription, deals with extracting drum instrument note onsets from an audio source. Recently, progress in transcription performance has been made using non-negative matrix factorization as well as deep learning methods. However, these works primarily focus on transcribing three drum instruments only: snare drum, bass drum, and hi-hat. Yet, for many applications, the ability to transcribe more drum instruments which make up standard drum kits used in western popular music would be desirable. In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments. First, the shortcomings of publicly available datasets in this context are discussed. To overcome these limitations, a larger synthetic dataset is introduced. Then, methods to train models using the new dataset focusing on generalization to real world data are investigated. Finally, the trained models are evaluated on publicly available datasets and results are discussed. The contributions of this work comprise: (i.) a large-scale synthetic dataset for drum transcription, (ii.) first steps towards an automatic drum transcription system that supports a larger range of instruments by evaluating and discussing training setups and the impact of datasets in this context, and (iii.) a publicly available set of trained models for drum transcription. Additional materials are available at http://ifs.tuwien.ac.at/~vogl/dafx2018
In this work, convolutional and convolutional recurrent neural networks are trained to transcribe a wider range of drum instruments.
http://arxiv.org/abs/1806.06676v2
http://arxiv.org/pdf/1806.06676v2.pdf
null
[ "Richard Vogl", "Gerhard Widmer", "Peter Knees" ]
[ "Drum Transcription", "Music Transcription" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/subword-and-crossword-units-for-ctc-acoustic
1712.06855
null
null
Subword and Crossword Units for CTC Acoustic Models
This paper proposes a novel approach to create a unit set for CTC based speech recognition systems. By using Byte Pair Encoding we learn a unit set of an arbitrary size on a given training text. In contrast to using characters or words as units this allows us to find a good trade-off between the size of our unit set and the available training data. We evaluate both crossword units, which may span multiple words, and subword units. By combining this approach with decoding methods using a separate language model we are able to achieve state-of-the-art results for grapheme based CTC systems.
null
http://arxiv.org/abs/1712.06855v2
http://arxiv.org/pdf/1712.06855v2.pdf
null
[ "Thomas Zenkel", "Ramon Sanabria", "Florian Metze", "Alex Waibel" ]
[ "Language Modeling", "Language Modelling", "speech-recognition", "Speech Recognition" ]
2017-12-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/cardinality-leap-for-open-ended-evolution
1806.06628
null
null
Cardinality Leap for Open-Ended Evolution: Theoretical Consideration and Demonstration by "Hash Chemistry"
Open-ended evolution requires unbounded possibilities that evolving entities can explore. The cardinality of a set of those possibilities thus has a significant implication for the open-endedness of evolution. We propose that facilitating formation of higher-order entities is a generalizable, effective way to cause a "cardinality leap" in the set of possibilities that promotes open-endedness. We demonstrate this idea with a simple, proof-of-concept toy model called "Hash Chemistry" that uses a hash function as a fitness evaluator of evolving entities of any size/order. Simulation results showed that the cumulative number of unique replicating entities that appeared in evolution increased almost linearly along time without an apparent bound, demonstrating the effectiveness of the proposed cardinality leap. It was also observed that the number of individual entities involved in a single replication event gradually increased over time, indicating evolutionary appearance of higher-order entities. Moreover, these behaviors were not observed in control experiments in which fitness evaluators were replaced by random number generators. This strongly suggests that the dynamics observed in Hash Chemistry were indeed evolutionary behaviors driven by selection and adaptation taking place at multiple scales.
null
http://arxiv.org/abs/1806.06628v4
http://arxiv.org/pdf/1806.06628v4.pdf
null
[ "Hiroki Sayama" ]
[]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/warp-wavelets-with-adaptive-recursive
1711.00789
null
null
Learning Asymmetric and Local Features in Multi-Dimensional Data through Wavelets with Recursive Partitioning
Effective learning of asymmetric and local features in images and other data observed on multi-dimensional grids is a challenging objective critical for a wide range of image processing applications involving biomedical and natural images. It requires methods that are sensitive to local details while fast enough to handle massive numbers of images of ever increasing sizes. We introduce a probabilistic model-based framework that achieves these objectives by incorporating adaptivity into discrete wavelet transforms (DWT) through Bayesian hierarchical modeling, thereby allowing wavelet bases to adapt to the geometric structure of the data while maintaining the high computational scalability of wavelet methods---linear in the sample size (e.g., the resolution of an image). We derive a recursive representation of the Bayesian posterior model which leads to an exact message passing algorithm to complete learning and inference. While our framework is applicable to a range of problems including multi-dimensional signal processing, compression, and structural learning, we illustrate its work and evaluate its performance in the context of image reconstruction using real images from the ImageNet database, two widely used benchmark datasets, and a dataset from retinal optical coherence tomography and compare its performance to state-of-the-art methods based on basis transforms and deep learning.
Effective learning of asymmetric and local features in images and other data observed on multi-dimensional grids is a challenging objective critical for a wide range of image processing applications involving biomedical and natural images.
https://arxiv.org/abs/1711.00789v5
https://arxiv.org/pdf/1711.00789v5.pdf
null
[ "Meng Li", "Li Ma" ]
[ "Bayesian Inference", "Image Reconstruction" ]
2017-11-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/on-enhancing-speech-emotion-recognition-using
1806.06626
null
null
On Enhancing Speech Emotion Recognition using Generative Adversarial Networks
Generative Adversarial Networks (GANs) have gained a lot of attention from the machine learning community due to their ability to learn and mimic an input data distribution. GANs consist of a discriminator and a generator working in tandem playing a min-max game to learn a target underlying data distribution when fed with data-points sampled from a simpler distribution (like a uniform or Gaussian distribution). Once trained, they allow synthetic generation of examples sampled from the target distribution. We investigate the application of GANs to generate synthetic feature vectors used for speech emotion recognition. Specifically, we investigate two set ups: (i) a vanilla GAN that learns the distribution of a lower dimensional representation of the actual higher dimensional feature vector and, (ii) a conditional GAN that learns the distribution of the higher dimensional feature vectors conditioned on the labels or the emotional class to which it belongs. As a potential practical application of these synthetically generated samples, we measure any improvement in a classifier's performance when the synthetic data is used along with real data for training. We perform cross-validation analyses followed by a cross-corpus study.
null
http://arxiv.org/abs/1806.06626v1
http://arxiv.org/pdf/1806.06626v1.pdf
null
[ "Saurabh Sahu", "Rahul Gupta", "Carol Espy-Wilson" ]
[ "Cross-corpus", "Emotion Recognition", "Speech Emotion Recognition" ]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a...
https://paperswithcode.com/paper/banach-wasserstein-gan
1806.06621
null
null
Banach Wasserstein GAN
Wasserstein Generative Adversarial Networks (WGANs) can be used to generate realistic samples from complicated image distributions. The Wasserstein metric used in WGANs is based on a notion of distance between individual images, which induces a notion of distance between probability distributions of images. So far the community has considered $\ell^2$ as the underlying distance. We generalize the theory of WGAN with gradient penalty to Banach spaces, allowing practitioners to select the features to emphasize in the generator. We further discuss the effect of some particular choices of underlying norms, focusing on Sobolev norms. Finally, we demonstrate a boost in performance for an appropriate choice of norm on CIFAR-10 and CelebA.
Wasserstein Generative Adversarial Networks (WGANs) can be used to generate realistic samples from complicated image distributions.
http://arxiv.org/abs/1806.06621v2
http://arxiv.org/pdf/1806.06621v2.pdf
NeurIPS 2018 12
[ "Jonas Adler", "Sebastian Lunz" ]
[]
2018-06-18T00:00:00
http://papers.nips.cc/paper/7909-banach-wasserstein-gan
http://papers.nips.cc/paper/7909-banach-wasserstein-gan.pdf
banach-wasserstein-gan-1
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a...
https://paperswithcode.com/paper/comparison-based-random-forests
1806.06616
null
null
Comparison-Based Random Forests
Assume we are given a set of items from a general metric space, but we neither have access to the representation of the data nor to the distances between data points. Instead, suppose that we can actively choose a triplet of items (A,B,C) and ask an oracle whether item A is closer to item B or to item C. In this paper, we propose a novel random forest algorithm for regression and classification that relies only on such triplet comparisons. In the theory part of this paper, we establish sufficient conditions for the consistency of such a forest. In a set of comprehensive experiments, we then demonstrate that the proposed random forest is efficient both for classification and regression. In particular, it is even competitive with other methods that have direct access to the metric representation of the data.
null
http://arxiv.org/abs/1806.06616v1
http://arxiv.org/pdf/1806.06616v1.pdf
ICML 2018 7
[ "Siavash Haghiri", "Damien Garreau", "Ulrike Von Luxburg" ]
[ "General Classification", "regression", "Triplet" ]
2018-06-18T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=1979
http://proceedings.mlr.press/v80/haghiri18a/haghiri18a.pdf
comparison-based-random-forests-1
null
[]
https://paperswithcode.com/paper/on-multi-resident-activity-recognition-in
1806.06611
null
null
On Multi-resident Activity Recognition in Ambient Smart-Homes
Increasing attention to the research on activity monitoring in smart homes has motivated the employment of ambient intelligence to reduce the deployment cost and solve the privacy issue. Several approaches have been proposed for multi-resident activity recognition; however, a comprehensive benchmark for future research and practical selection of models is still lacking. In this paper we study different methods for multi-resident activity recognition and evaluate them on the same sets of data. The experimental results show that a recurrent neural network with gated recurrent units is better than other models and also considerably efficient, and that using combined activities as single labels is more effective than representing them as separate labels.
null
http://arxiv.org/abs/1806.06611v1
http://arxiv.org/pdf/1806.06611v1.pdf
null
[ "Son N. Tran", "Qing Zhang", "Mohan Karunanithi" ]
[ "Activity Recognition" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/evaluating-and-characterizing-incremental
1806.06610
null
null
Evaluating and Characterizing Incremental Learning from Non-Stationary Data
Incremental learning from non-stationary data poses special challenges to the field of machine learning. Although new algorithms have been developed for this, assessment of results and comparison of behaviors are still open problems, mainly because evaluation metrics, adapted from more traditional tasks, can be ineffective in this context. Overall, there is a lack of common testing practices. This paper thus presents a testbed for incremental non-stationary learning algorithms, based on specially designed synthetic datasets. Also, test results are reported for some well-known algorithms to show that the proposed methodology is effective at characterizing their strengths and weaknesses. It is expected that this methodology will provide a common basis for evaluating future contributions in the field.
null
http://arxiv.org/abs/1806.06610v1
http://arxiv.org/pdf/1806.06610v1.pdf
null
[ "Alejandro Cervantes", "Christian Gagné", "Pedro Isasi", "Marc Parizeau" ]
[ "Incremental Learning" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/quantized-compressive-k-means
1804.10109
null
null
Quantized Compressive K-Means
The recent framework of compressive statistical learning aims at designing tractable learning algorithms that use only a heavily compressed representation-or sketch-of massive datasets. Compressive K-Means (CKM) is such a method: it estimates the centroids of data clusters from pooled, non-linear, random signatures of the learning examples. While this approach significantly reduces computational time on very large datasets, its digital implementation wastes acquisition resources because the learning examples are compressed only after the sensing stage. The present work generalizes the sketching procedure initially defined in Compressive K-Means to a large class of periodic nonlinearities including hardware-friendly implementations that compressively acquire entire datasets. This idea is exemplified in a Quantized Compressive K-Means procedure, a variant of CKM that leverages 1-bit universal quantization (i.e. retaining the least significant bit of a standard uniform quantizer) as the periodic sketch nonlinearity. Trading for this resource-efficient signature (standard in most acquisition schemes) has almost no impact on the clustering performances, as illustrated by numerical experiments.
null
http://arxiv.org/abs/1804.10109v2
http://arxiv.org/pdf/1804.10109v2.pdf
null
[ "Vincent Schellekens", "Laurent Jacques" ]
[ "Clustering", "Quantization" ]
2018-04-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/self-attentional-acoustic-models
1803.09519
null
null
Self-Attentional Acoustic Models
Self-attention is a method of encoding sequences of vectors by relating these vectors to each other based on pairwise similarities. These models have recently shown promising results for modeling discrete sequences, but they are non-trivial to apply to acoustic modeling due to computational and modeling issues. In this paper, we apply self-attention to acoustic modeling, proposing several improvements to mitigate these issues: First, self-attention memory grows quadratically in the sequence length, which we address through a downsampling technique. Second, we find that previous approaches to incorporate position information into the model are unsuitable and explore other representations and hybrid models to this end. Third, to stress the importance of local context in the acoustic signal, we propose a Gaussian biasing approach that allows explicit control over the context range. Experiments find that our model approaches a strong baseline based on LSTMs with network-in-network connections while being much faster to compute. Besides speed, we find that interpretability is a strength of self-attentional acoustic models, and demonstrate that self-attention heads learn a linguistically plausible division of labor.
Self-attention is a method of encoding sequences of vectors by relating these vectors to each other based on pairwise similarities.
http://arxiv.org/abs/1803.09519v2
http://arxiv.org/pdf/1803.09519v2.pdf
null
[ "Matthias Sperber", "Jan Niehues", "Graham Neubig", "Sebastian Stüker", "Alex Waibel" ]
[]
2018-03-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/snap-ml-a-hierarchical-framework-for-machine
1803.06333
null
null
Snap ML: A Hierarchical Framework for Machine Learning
We describe a new software framework for fast training of generalized linear models. The framework, named Snap Machine Learning (Snap ML), combines recent advances in machine learning systems and algorithms in a nested manner to reflect the hierarchical architecture of modern computing systems. We prove theoretically that such a hierarchical system can accelerate training in distributed environments where intra-node communication is cheaper than inter-node communication. Additionally, we provide a review of the implementation of Snap ML in terms of GPU acceleration, pipelining, communication patterns and software architecture, highlighting aspects that were critical for achieving high performance. We evaluate the performance of Snap ML in both single-node and multi-node environments, quantifying the benefit of the hierarchical scheme and the data streaming functionality, and comparing with other widely-used machine learning software frameworks. Finally, we present a logistic regression benchmark on the Criteo Terabyte Click Logs dataset and show that Snap ML achieves the same test loss an order of magnitude faster than any of the previously reported results, including those obtained using TensorFlow and scikit-learn.
null
http://arxiv.org/abs/1803.06333v3
http://arxiv.org/pdf/1803.06333v3.pdf
NeurIPS 2018 12
[ "Celestine Dünner", "Thomas Parnell", "Dimitrios Sarigiannis", "Nikolas Ioannou", "Andreea Anghel", "Gummadi Ravi", "Madhusudanan Kandasamy", "Haralampos Pozidis" ]
[ "BIG-bench Machine Learning", "GPU" ]
2018-03-16T00:00:00
http://papers.nips.cc/paper/7309-snap-ml-a-hierarchical-framework-for-machine-learning
http://papers.nips.cc/paper/7309-snap-ml-a-hierarchical-framework-for-machine-learning.pdf
snap-ml-a-hierarchical-framework-for-machine-1
null
[ { "code_snippet_url": null, "description": "**Logistic Regression**, despite its name, is a linear model for classification rather than regression. Logistic regression is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, th...
https://paperswithcode.com/paper/multilingual-bottleneck-features-for-subword
1803.08863
null
null
Multilingual bottleneck features for subword modeling in zero-resource languages
How can we effectively develop speech technology for languages where no transcribed data is available? Many existing approaches use no annotated resources at all, yet it makes sense to leverage information from large annotated corpora in other languages, for example in the form of multilingual bottleneck features (BNFs) obtained from a supervised speech recognition system. In this work, we evaluate the benefits of BNFs for subword modeling (feature extraction) in six unseen languages on a word discrimination task. First we establish a strong unsupervised baseline by combining two existing methods: vocal tract length normalisation (VTLN) and the correspondence autoencoder (cAE). We then show that BNFs trained on a single language already beat this baseline; including up to 10 languages results in additional improvements which cannot be matched by just adding more data from a single language. Finally, we show that the cAE can improve further on the BNFs if high-quality same-word pairs are available.
How can we effectively develop speech technology for languages where no transcribed data is available?
http://arxiv.org/abs/1803.08863v2
http://arxiv.org/pdf/1803.08863v2.pdf
null
[ "Enno Hermann", "Sharon Goldwater" ]
[ "speech-recognition", "Speech Recognition" ]
2018-03-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-to-write-stylized-chinese-characters
1712.06424
null
null
Learning to Write Stylized Chinese Characters by Reading a Handful of Examples
Automatically writing stylized Chinese characters is an attractive yet challenging task due to its wide applicabilities. In this paper, we propose a novel framework named Style-Aware Variational Auto-Encoder (SA-VAE) to flexibly generate Chinese characters. Specifically, we propose to capture the different characteristics of a Chinese character by disentangling the latent features into content-related and style-related components. Considering the complex shapes and structures, we incorporate the structure information as prior knowledge into our framework to guide the generation. Our framework shows a powerful one-shot/low-shot generalization ability by inferring the style component given a character with unseen style. To the best of our knowledge, this is the first attempt to learn to write new-style Chinese characters by observing only one or a few examples. Extensive experiments demonstrate its effectiveness in generating different stylized Chinese characters by fusing the feature vectors corresponding to different contents and styles, which is of significant importance in real-world applications.
null
http://arxiv.org/abs/1712.06424v3
http://arxiv.org/pdf/1712.06424v3.pdf
null
[ "Danyang Sun", "Tongzheng Ren", "Chongxun Li", "Hang Su", "Jun Zhu" ]
[]
2017-12-06T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ipose-instance-aware-6d-pose-estimation-of
1712.01924
null
null
iPose: Instance-Aware 6D Pose Estimation of Partly Occluded Objects
We address the task of 6D pose estimation of known rigid objects from single input images in scenarios where the objects are partly occluded. Recent RGB-D-based methods are robust to moderate degrees of occlusion. For RGB inputs, no previous method works well for partly occluded objects. Our main contribution is to present the first deep learning-based system that estimates accurate poses for partly occluded objects from RGB-D and RGB input. We achieve this with a new instance-aware pipeline that decomposes 6D object pose estimation into a sequence of simpler steps, where each step removes specific aspects of the problem. The first step localizes all known objects in the image using an instance segmentation network, and hence eliminates surrounding clutter and occluders. The second step densely maps pixels to 3D object surface positions, so called object coordinates, using an encoder-decoder network, and hence eliminates object appearance. The third, and final, step predicts the 6D pose using geometric optimization. We demonstrate that we significantly outperform the state-of-the-art for pose estimation of partly occluded objects for both RGB and RGB-D input.
null
http://arxiv.org/abs/1712.01924v3
http://arxiv.org/pdf/1712.01924v3.pdf
null
[ "Omid Hosseini Jafari", "Siva Karthik Mustikovela", "Karl Pertsch", "Eric Brachmann", "Carsten Rother" ]
[ "6D Pose Estimation", "6D Pose Estimation using RGB", "Decoder", "Instance Segmentation", "Object", "Pose Estimation", "Semantic Segmentation" ]
2017-12-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/uncertainty-in-multitask-learning-joint
1806.06595
null
null
Uncertainty in multitask learning: joint representations for probabilistic MR-only radiotherapy planning
Multi-task neural network architectures provide a mechanism that jointly integrates information from distinct sources. It is ideal in the context of MR-only radiotherapy planning as it can jointly regress a synthetic CT (synCT) scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic multi-task network that estimates: 1) intrinsic uncertainty through a heteroscedastic noise model for spatially-adaptive task loss weighting and 2) parameter uncertainty through approximate Bayesian inference. This allows sampling of multiple segmentations and synCTs that share their network representation. We test our model on prostate cancer scans and show that it produces more accurate and consistent synCTs with a better estimation in the variance of the errors, state of the art results in OAR segmentation and a methodology for quality assurance in radiotherapy treatment planning.
null
http://arxiv.org/abs/1806.06595v1
http://arxiv.org/pdf/1806.06595v1.pdf
null
[ "Felix J. S. Bragman", "Ryutaro Tanno", "Zach Eaton-Rosen", "Wenqi Li", "David J. Hawkes", "Sebastien Ourselin", "Daniel C. Alexander", "Jamie R. McClelland", "M. Jorge Cardoso" ]
[ "Bayesian Inference" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deep-recurrent-neural-network-for-multi
1806.06594
null
null
Deep Recurrent Neural Network for Multi-target Filtering
This paper addresses the problem of fixed motion and measurement models for multi-target filtering using an adaptive learning framework. This is performed by defining target tuples with random finite set terminology and utilisation of recurrent neural networks with a long short-term memory architecture. A novel data association algorithm compatible with the predicted tracklet tuples is proposed, enabling the update of occluded targets, in addition to assigning birth, survival and death of targets. The algorithm is evaluated over a commonly used filtering simulation scenario, with highly promising results.
null
http://arxiv.org/abs/1806.06594v2
http://arxiv.org/pdf/1806.06594v2.pdf
null
[ "Mehryar Emambakhsh", "Alessandro Bay", "Eduard Vazquez" ]
[]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/low-resource-speech-to-text-translation
1803.09164
null
null
Low-Resource Speech-to-Text Translation
Speech-to-text translation has many potential applications for low-resource languages, but the typical approach of cascading speech recognition with machine translation is often impossible, since the transcripts needed to train a speech recognizer are usually not available for low-resource languages. Recent work has found that neural encoder-decoder models can learn to directly translate foreign speech in high-resource scenarios, without the need for intermediate transcription. We investigate whether this approach also works in settings where both data and computation are limited. To make the approach efficient, we make several architectural changes, including a change from character-level to word-level decoding. We find that this choice yields crucial speed improvements that allow us to train with fewer computational resources, yet still performs well on frequent words. We explore models trained on between 20 and 160 hours of data, and find that although models trained on less data have considerably lower BLEU scores, they can still predict words with relatively high precision and recall---around 50% for a model trained on 50 hours of data, versus around 60% for the full 160 hour model. Thus, they may still be useful for some low-resource scenarios.
null
http://arxiv.org/abs/1803.09164v2
http://arxiv.org/pdf/1803.09164v2.pdf
null
[ "Sameer Bansal", "Herman Kamper", "Karen Livescu", "Adam Lopez", "Sharon Goldwater" ]
[ "Decoder", "Machine Translation", "speech-recognition", "Speech Recognition", "Speech-to-Text", "Speech-to-Text Translation", "Translation" ]
2018-03-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/computational-theories-of-curiosity-driven
1802.10546
null
null
Computational Theories of Curiosity-Driven Learning
What are the functions of curiosity? What are the mechanisms of curiosity-driven learning? We approach these questions about the living using concepts and tools from machine learning and developmental robotics. We argue that curiosity-driven learning enables organisms to make discoveries to solve complex problems with rare or deceptive rewards. By fostering exploration and discovery of a diversity of behavioural skills, and ignoring these rewards, curiosity can be efficient to bootstrap learning when there is no information, or deceptive information, about local improvement towards these problems. We also explain the key role of curiosity for efficient learning of world models. We review both normative and heuristic computational frameworks used to understand the mechanisms of curiosity in humans, conceptualizing the child as a sense-making organism. These frameworks enable us to discuss the bi-directional causal links between curiosity and learning, and to provide new hypotheses about the fundamental role of curiosity in self-organizing developmental structures through curriculum learning. We present various developmental robotics experiments that study these mechanisms in action, both supporting these hypotheses to understand better curiosity in humans and opening new research avenues in machine learning and artificial intelligence. Finally, we discuss challenges for the design of experimental paradigms for studying curiosity in psychology and cognitive neuroscience. Keywords: Curiosity, intrinsic motivation, lifelong learning, predictions, world model, rewards, free-energy principle, learning progress, machine learning, AI, developmental robotics, development, curriculum learning, self-organization.
null
http://arxiv.org/abs/1802.10546v2
http://arxiv.org/pdf/1802.10546v2.pdf
null
[ "Pierre-Yves Oudeyer" ]
[ "BIG-bench Machine Learning", "Lifelong learning" ]
2018-02-28T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/nonparametric-topic-modeling-with-neural
1806.06583
null
null
Nonparametric Topic Modeling with Neural Inference
This work focuses on combining nonparametric topic models with Auto-Encoding Variational Bayes (AEVB). Specifically, we first propose iTM-VAE, where the topics are treated as trainable parameters and the document-specific topic proportions are obtained by a stick-breaking construction. The inference of iTM-VAE is modeled by neural networks such that it can be computed in a simple feed-forward manner. We also describe how to introduce a hyper-prior into iTM-VAE so as to model the uncertainty of the prior parameter. Actually, the hyper-prior technique is quite general and we show that it can be applied to other AEVB based models to alleviate the {\it collapse-to-prior} problem elegantly. Moreover, we also propose HiTM-VAE, where the document-specific topic distributions are generated in a hierarchical manner. HiTM-VAE is even more flexible and can generate topic distributions with better variability. Experimental results on 20News and Reuters RCV1-V2 datasets show that the proposed models outperform the state-of-the-art baselines significantly. The advantages of the hyper-prior technique and the hierarchical model construction are also confirmed by experiments.
null
http://arxiv.org/abs/1806.06583v1
http://arxiv.org/pdf/1806.06583v1.pdf
null
[ "Xuefei Ning", "Yin Zheng", "Zhuxi Jiang", "Yu Wang", "Huazhong Yang", "Junzhou Huang" ]
[ "Topic Models" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/wsd-algorithm-based-on-a-new-method-of-vector
1805.09559
null
null
WSD algorithm based on a new method of vector-word contexts proximity calculation via epsilon-filtration
The problem of word sense disambiguation (WSD) is considered in the article. Given a set of synonyms (synsets) and sentences with these synonyms. It is necessary to select the meaning of the word in the sentence automatically. 1285 sentences were tagged by experts, namely, one of the dictionary meanings was selected by experts for target words. To solve the WSD-problem, an algorithm based on a new method of vector-word contexts proximity calculation is proposed. In order to achieve higher accuracy, a preliminary epsilon-filtering of words is performed, both in the sentence and in the set of synonyms. An extensive program of experiments was carried out. Four algorithms are implemented, including a new algorithm. Experiments have shown that in a number of cases the new algorithm shows better results. The developed software and the tagged corpus have an open license and are available online. Wiktionary and Wikisource are used. A brief description of this work can be viewed in slides (https://goo.gl/9ak6Gt). Video lecture in Russian on this research is available online (https://youtu.be/-DLmRkepf58).
It is necessary to select the meaning of the word in the sentence automatically.
http://arxiv.org/abs/1805.09559v2
http://arxiv.org/pdf/1805.09559v2.pdf
null
[ "Alexander Kirillov", "Natalia Krizhanovsky", "Andrew Krizhanovsky" ]
[ "Sentence", "Word Sense Disambiguation" ]
2018-05-24T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-kanerva-machine-a-generative-distributed
1804.01756
null
S1HlA-ZAZ
The Kanerva Machine: A Generative Distributed Memory
We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them. Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism. The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule. We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution. Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation. Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train.
null
http://arxiv.org/abs/1804.01756v3
http://arxiv.org/pdf/1804.01756v3.pdf
ICLR 2018 1
[ "Yan Wu", "Greg Wayne", "Alex Graves", "Timothy Lillicrap" ]
[]
2018-04-05T00:00:00
https://openreview.net/forum?id=S1HlA-ZAZ
https://openreview.net/pdf?id=S1HlA-ZAZ
the-kanerva-machine-a-generative-distributed-1
null
[]
https://paperswithcode.com/paper/rendernet-a-deep-convolutional-network-for
1806.06575
null
null
RenderNet: A deep convolutional network for differentiable rendering from 3D shapes
Traditional computer graphics rendering pipeline is designed for procedurally generating 2D quality images from 3D shapes with high performance. The non-differentiability due to discrete operations such as visibility computation makes it hard to explicitly correlate rendering parameters and the resulting image, posing a significant challenge for inverse rendering tasks. Recent work on differentiable rendering achieves differentiability either by designing surrogate gradients for non-differentiable operations or via an approximate but differentiable renderer. These methods, however, are still limited when it comes to handling occlusion, and restricted to particular rendering effects. We present RenderNet, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes. Spatial occlusion and shading calculation are automatically encoded in the network. Our experiments show that RenderNet can successfully learn to implement different shaders, and can be used in inverse rendering tasks to estimate shape, pose, lighting and texture from a single image.
We present RenderNet, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes.
http://arxiv.org/abs/1806.06575v3
http://arxiv.org/pdf/1806.06575v3.pdf
NeurIPS 2018 12
[ "Thu Nguyen-Phuoc", "Chuan Li", "Stephen Balaban", "Yong-Liang Yang" ]
[ "Inverse Rendering" ]
2018-06-18T00:00:00
http://papers.nips.cc/paper/8014-rendernet-a-deep-convolutional-network-for-differentiable-rendering-from-3d-shapes
http://papers.nips.cc/paper/8014-rendernet-a-deep-convolutional-network-for-differentiable-rendering-from-3d-shapes.pdf
rendernet-a-deep-convolutional-network-for-1
null
[]
https://paperswithcode.com/paper/distributed-learning-with-compressed
1806.06573
null
null
Distributed learning with compressed gradients
Asynchronous computation and gradient compression have emerged as two key techniques for achieving scalability in distributed optimization for large-scale machine learning. This paper presents a unified analysis framework for distributed gradient methods operating with staled and compressed gradients. Non-asymptotic bounds on convergence rates and information exchange are derived for several optimization algorithms. These bounds give explicit expressions for step-sizes and characterize how the amount of asynchrony and the compression accuracy affect iteration and communication complexity guarantees. Numerical results highlight convergence properties of different gradient compression algorithms and confirm that fast convergence under limited information exchange is indeed possible.
null
http://arxiv.org/abs/1806.06573v2
http://arxiv.org/pdf/1806.06573v2.pdf
null
[ "Sarit Khirirat", "Hamid Reza Feyzmahdavian", "Mikael Johansson" ]
[ "BIG-bench Machine Learning", "Distributed Optimization" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/subgram-extending-skip-gram-word
1806.06571
null
null
SubGram: Extending Skip-gram Word Representation with Substrings
Skip-gram (word2vec) is a recent method for creating vector representations of words ("distributed word representations") using a neural network. The representation gained popularity in various areas of natural language processing, because it seems to capture syntactic and semantic information about words without any explicit supervision in this respect. We propose SubGram, a refinement of the Skip-gram model to consider also the word structure during the training process, achieving large gains on the Skip-gram original test set.
Skip-gram (word2vec) is a recent method for creating vector representations of words ("distributed word representations") using a neural network.
http://arxiv.org/abs/1806.06571v1
http://arxiv.org/pdf/1806.06571v1.pdf
null
[ "Tom Kocmi", "Ondřej Bojar" ]
[]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-from-outside-the-viability-kernel
1806.06569
null
null
Learning from Outside the Viability Kernel: Why we Should Build Robots that can Fall with Grace
Despite impressive results using reinforcement learning to solve complex problems from scratch, in robotics this has still been largely limited to model-based learning with very informative reward functions. One of the major challenges is that the reward landscape often has large patches with no gradient, making it difficult to sample gradients effectively. We show here that the robot state-initialization can have a more important effect on the reward landscape than is generally expected. In particular, we show the counter-intuitive benefit of including initializations that are unviable, in other words initializing in states that are doomed to fail.
null
http://arxiv.org/abs/1806.06569v1
http://arxiv.org/pdf/1806.06569v1.pdf
null
[ "Steve Heim", "Alexander Spröwitz" ]
[ "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ista-net-interpretable-optimization-inspired
1706.07929
null
null
ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing
With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $\ell_1$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (\eg nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed {ISTA-Net}$^+$, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: \textsl{http://jianzhang.tech/projects/ISTA-Net}.
With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones.
http://arxiv.org/abs/1706.07929v2
http://arxiv.org/pdf/1706.07929v2.pdf
CVPR 2018 6
[ "Jian Zhang", "Bernard Ghanem" ]
[ "Compressive Sensing" ]
2017-06-24T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_ISTA-Net_Interpretable_Optimization-Inspired_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_ISTA-Net_Interpretable_Optimization-Inspired_CVPR_2018_paper.pdf
ista-net-interpretable-optimization-inspired-1
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key com...
https://paperswithcode.com/paper/state-gradients-for-rnn-memory-analysis
1805.04264
null
null
State Gradients for RNN Memory Analysis
We present a framework for analyzing what the state in RNNs remembers from its input embeddings. Our approach is inspired by backpropagation, in the sense that we compute the gradients of the states with respect to the input embeddings. The gradient matrix is decomposed with Singular Value Decomposition to analyze which directions in the embedding space are best transferred to the hidden state space, characterized by the largest singular values. We apply our approach to LSTM language models and investigate to what extent and for how long certain classes of words are remembered on average for a certain corpus. Additionally, the extent to which a specific property or relationship is remembered by the RNN can be tracked by comparing a vector characterizing that property with the direction(s) in embedding space that are best preserved in hidden state space.
null
http://arxiv.org/abs/1805.04264v2
http://arxiv.org/pdf/1805.04264v2.pdf
WS 2018 11
[ "Lyan Verwimp", "Hugo Van hamme", "Vincent Renkens", "Patrick Wambacq" ]
[]
2018-05-11T00:00:00
https://aclanthology.org/W18-5443
https://aclanthology.org/W18-5443.pdf
state-gradients-for-rnn-memory-analysis-1
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277", "description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\...
https://paperswithcode.com/paper/convex-optimization-with-unbounded-nonconvex
1711.02621
null
null
Convex Optimization with Unbounded Nonconvex Oracles using Simulated Annealing
We consider the problem of minimizing a convex objective function $F$ when one can only evaluate its noisy approximation $\hat{F}$. Unless one assumes some structure on the noise, $\hat{F}$ may be an arbitrary nonconvex function, making the task of minimizing $F$ intractable. To overcome this, prior work has often focused on the case when $F(x)-\hat{F}(x)$ is uniformly-bounded. In this paper we study the more general case when the noise has magnitude $\alpha F(x) + \beta$ for some $\alpha, \beta > 0$, and present a polynomial time algorithm that finds an approximate minimizer of $F$ for this noise model. Previously, Markov chains, such as the stochastic gradient Langevin dynamics, have been used to arrive at approximate solutions to these optimization problems. However, for the noise model considered in this paper, no single temperature allows such a Markov chain to both mix quickly and concentrate near the global minimizer. We bypass this by combining "simulated annealing" with the stochastic gradient Langevin dynamics, and gradually decreasing the temperature of the chain in order to approach the global minimizer. As a corollary one can approximately minimize a nonconvex function that is close to a convex function; however, the closeness can deteriorate as one moves away from the optimum.
null
http://arxiv.org/abs/1711.02621v2
http://arxiv.org/pdf/1711.02621v2.pdf
null
[ "Oren Mangoubi", "Nisheeth K. Vishnoi" ]
[]
2017-11-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/incremental-sparse-bayesian-ordinal
1806.06553
null
null
Incremental Sparse Bayesian Ordinal Regression
Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning. An important class of approaches to OR models the problem as a linear combination of basis functions that map features to a high dimensional non-linear space. However, most of the basis function-based algorithms are time consuming. We propose an incremental sparse Bayesian approach to OR tasks and introduce an algorithm to sequentially learn the relevant basis functions in the ordinal scenario. Our method, called Incremental Sparse Bayesian Ordinal Regression (ISBOR), automatically optimizes the hyper-parameters via the type-II maximum likelihood method. By exploiting fast marginal likelihood optimization, ISBOR can avoid big matrix inverses, which is the main bottleneck in applying basis function-based algorithms to OR tasks on large-scale datasets. We show that ISBOR can make accurate predictions with parsimonious basis functions while offering automatic estimates of the prediction uncertainty. Extensive experiments on synthetic and real word datasets demonstrate the efficiency and effectiveness of ISBOR compared to other basis function-based OR approaches.
Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning.
http://arxiv.org/abs/1806.06553v1
http://arxiv.org/pdf/1806.06553v1.pdf
null
[ "Chang Li", "Maarten de Rijke" ]
[ "Multi-Label Learning", "regression" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/sniper-efficient-multi-scale-training
1805.09300
null
null
SNIPER: Efficient Multi-Scale Training
We present SNIPER, an algorithm for performing efficient multi-scale training in instance level visual recognition tasks. Instead of processing every pixel in an image pyramid, SNIPER processes context regions around ground-truth instances (referred to as chips) at the appropriate scale. For background sampling, these context-regions are generated using proposals extracted from a region proposal network trained with a short learning schedule. Hence, the number of chips generated per image during training adaptively changes based on the scene complexity. SNIPER only processes 30% more pixels compared to the commonly used single scale training at 800x1333 pixels on the COCO dataset. But, it also observes samples from extreme resolutions of the image pyramid, like 1400x2000 pixels. As SNIPER operates on resampled low resolution chips (512x512 pixels), it can have a batch size as large as 20 on a single GPU even with a ResNet-101 backbone. Therefore it can benefit from batch-normalization during training without the need for synchronizing batch-normalization statistics across GPUs. SNIPER brings training of instance level recognition tasks like object detection closer to the protocol for image classification and suggests that the commonly accepted guideline that it is important to train on high resolution images for instance level visual recognition tasks might not be correct. Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU. Code is available at https://github.com/MahyarNajibi/SNIPER/.
Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU.
http://arxiv.org/abs/1805.09300v3
http://arxiv.org/pdf/1805.09300v3.pdf
NeurIPS 2018 12
[ "Bharat Singh", "Mahyar Najibi", "Larry S. Davis" ]
[ "GPU", "image-classification", "object-detection", "Object Detection", "Region Proposal" ]
2018-05-23T00:00:00
http://papers.nips.cc/paper/8143-sniper-efficient-multi-scale-training
http://papers.nips.cc/paper/8143-sniper-efficient-multi-scale-training.pdf
sniper-efficient-multi-scale-training-1
null
[ { "code_snippet_url": "", "description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - me...
https://paperswithcode.com/paper/constraining-the-dynamics-of-deep
1802.05680
null
null
Constraining the Dynamics of Deep Probabilistic Models
We introduce a novel generative formulation of deep probabilistic models implementing "soft" constraints on their function dynamics. In particular, we develop a flexible methodological framework where the modeled functions and derivatives of a given order are subject to inequality or equality constraints. We then characterize the posterior distribution over model and constraint parameters through stochastic variational inference. As a result, the proposed approach allows for accurate and scalable uncertainty quantification on the predictions and on all parameters. We demonstrate the application of equality constraints in the challenging problem of parameter inference in ordinary differential equation models, while we showcase the application of inequality constraints on the problem of monotonic regression of count data. The proposed approach is extensively tested in several experimental settings, leading to highly competitive results in challenging modeling applications, while offering high expressiveness, flexibility and scalability.
null
http://arxiv.org/abs/1802.05680v2
http://arxiv.org/pdf/1802.05680v2.pdf
ICML 2018 7
[ "Marco Lorenzi", "Maurizio Filippone" ]
[ "Uncertainty Quantification", "Variational Inference" ]
2018-02-15T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2209
http://proceedings.mlr.press/v80/lorenzi18a/lorenzi18a.pdf
constraining-the-dynamics-of-deep-1
null
[]
https://paperswithcode.com/paper/a-simple-reservoir-model-of-working-memory
1806.06545
null
null
A Simple Reservoir Model of Working Memory with Real Values
The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular, working memory. Here, we study to what extent a group of randomly connected units (namely an Echo State Network, ESN) can store and maintain (as output) an arbitrary real value from a streamed input, i.e. can act as a sustained working memory unit. Furthermore, we explore to what extent such an architecture can take advantage of the stored value in order to produce non-linear computations. Comparison between different architectures (with and without feedback, with and without a working memory unit) shows that an explicit memory improves the performances.
The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular, working memory.
http://arxiv.org/abs/1806.06545v1
http://arxiv.org/pdf/1806.06545v1.pdf
null
[ "Anthony Strock", "Nicolas Rougier", "Xavier Hinaut" ]
[]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/segmentation-of-photovoltaic-module-cells-in
1806.06530
null
null
Segmentation of Photovoltaic Module Cells in Uncalibrated Electroluminescence Images
High resolution electroluminescence (EL) images captured in the infrared spectrum make it possible to visually and non-destructively inspect the quality of photovoltaic (PV) modules. Currently, however, such a visual inspection requires trained experts to discern different kinds of defects, which is time-consuming and expensive. Automated segmentation of cells is therefore a key step in automating the visual inspection workflow. In this work, we propose a robust automated segmentation method for extraction of individual solar cells from EL images of PV modules. This enables controlled studies on large amounts of data to understand the effects of module degradation over time, a process not yet fully understood. The proposed method infers in several steps a high-level solar module representation from low-level edge features. An important step in the algorithm is to formulate the segmentation problem in terms of lens calibration by exploiting the plumbline constraint. We evaluate our method on a dataset of various solar module types containing a total of 408 solar cells with various defects. Our method robustly solves this task with a median weighted Jaccard index of 94.47% and an $F_1$ score of 97.62%, both indicating a very high similarity between automatically segmented and ground truth solar cell masks.
Automated segmentation of cells is therefore a key step in automating the visual inspection workflow.
https://arxiv.org/abs/1806.06530v4
https://arxiv.org/pdf/1806.06530v4.pdf
null
[ "Sergiu Deitsch", "Claudia Buerhop-Lutz", "Evgenii Sovetkin", "Ansgar Steland", "Andreas Maier", "Florian Gallwitz", "Christian Riess" ]
[ "Segmentation", "Solar Cell Segmentation" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dual-recovery-network-with-online
1701.05652
null
null
Dual Recovery Network with Online Compensation for Image Super-Resolution
Image super-resolution (SR) methods essentially lead to a loss of some high-frequency (HF) information when predicting high-resolution (HR) images from low-resolution (LR) images without using external references. To address this issue, we additionally utilize online retrieved data to facilitate image SR in a unified deep framework. A novel dual high-frequency recovery network (DHN) is proposed to predict an HR image with three parts: an LR image, an internal inferred HF (IHF) map (HF missing part inferred solely from the LR image) and an external extracted HF (EHF) map. In particular, we infer the HF information based on both the LR image and similar HR references which are retrieved online. For the EHF map, we align the references with affine transformation and then in the aligned references, part of HF signals are extracted by the proposed DHN to compensate for the HF loss. Extensive experimental results demonstrate that our DHN achieves notably better performance than state-of-the-art SR methods.
null
http://arxiv.org/abs/1701.05652v3
http://arxiv.org/pdf/1701.05652v3.pdf
null
[ "Sifeng Xia", "Wenhan Yang", "Jiaying Liu", "Zongming Guo" ]
[ "Image Super-Resolution", "Super-Resolution" ]
2017-01-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hitnet-a-neural-network-with-capsules
1806.06519
null
null
HitNet: a neural network with capsules embedded in a Hit-or-Miss layer, extended with hybrid data augmentation and ghost capsules
Neural networks designed for the task of classification have become a commodity in recent years. Many works target the development of better networks, which results in a complexification of their architectures with more layers, multiple sub-networks, or even the combination of multiple classifiers. In this paper, we show how to redesign a simple network to reach excellent performances, which are better than the results reproduced with CapsNet on several datasets, by replacing a layer with a Hit-or-Miss layer. This layer contains activated vectors, called capsules, that we train to hit or miss a central capsule by tailoring a specific centripetal loss function. We also show how our network, named HitNet, is capable of synthesizing a representative sample of the images of a given class by including a reconstruction network. This possibility allows us to develop a data augmentation step combining information from the data space and the feature space, resulting in a hybrid data augmentation process. In addition, we introduce the possibility for HitNet to adopt an alternative to the true target when needed by using the new concept of ghost capsules, which is used here to detect potentially mislabeled images in the training data.
In this paper, we show how to redesign a simple network to reach excellent performances, which are better than the results reproduced with CapsNet on several datasets, by replacing a layer with a Hit-or-Miss layer.
http://arxiv.org/abs/1806.06519v1
http://arxiv.org/pdf/1806.06519v1.pdf
null
[ "Adrien Deliège", "Anthony Cioppa", "Marc Van Droogenbroeck" ]
[ "Data Augmentation" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-information-autoencoding-family-a
1806.06514
null
null
The Information Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Models
A large number of objectives have been proposed to train latent variable generative models. We show that many of them are Lagrangian dual functions of the same primal optimization problem. The primal problem optimizes the mutual information between latent and visible variables, subject to the constraints of accurately modeling the data distribution and performing correct amortized inference. By choosing to maximize or minimize mutual information, and choosing different Lagrange multipliers, we obtain different objectives including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, beta-VAE, adversarial autoencoders, AVB, AS-VAE and InfoVAE. Based on this observation, we provide an exhaustive characterization of the statistical and computational trade-offs made by all the training objectives in this class of Lagrangian duals. Next, we propose a dual optimization method where we optimize model parameters as well as the Lagrange multipliers. This method achieves Pareto optimal solutions in terms of optimizing information and satisfying the constraints.
A large number of objectives have been proposed to train latent variable generative models.
http://arxiv.org/abs/1806.06514v2
http://arxiv.org/pdf/1806.06514v2.pdf
null
[ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ]
[]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116", "description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a ...
https://paperswithcode.com/paper/semi-tied-units-for-efficient-gating-in-lstm
1806.06513
null
null
Semi-tied Units for Efficient Gating in LSTM and Highway Networks
Gating is a key technique used for integrating information from multiple sources by long short-term memory (LSTM) models and has recently also been applied to other models such as the highway network. Although gating is powerful, it is rather expensive in terms of both computation and storage as each gating unit uses a separate full weight matrix. This issue can be severe since several gates can be used together in e.g. an LSTM cell. This paper proposes a semi-tied unit (STU) approach to solve this efficiency issue, which uses one shared weight matrix to replace those in all the units in the same layer. The approach is termed "semi-tied" since extra parameters are used to separately scale each of the shared output values. These extra scaling factors are associated with the network activation functions and result in the use of parameterised sigmoid, hyperbolic tangent, and rectified linear unit functions. Speech recognition experiments using British English multi-genre broadcast data showed that using STUs can reduce the calculation and storage cost by a factor of three for highway networks and four for LSTMs, while giving similar word error rates to the original models.
null
http://arxiv.org/abs/1806.06513v1
http://arxiv.org/pdf/1806.06513v1.pdf
null
[ "Chao Zhang", "Philip Woodland" ]
[ "speech-recognition", "Speech Recognition" ]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this ...
https://paperswithcode.com/paper/predicting-citation-counts-with-a-neural
1806.04641
null
null
Predicting Citation Counts with a Neural Network
We here describe and present results of a simple neural network that predicts individual researchers' future citation counts based on a variety of data from the researchers' past. For publications available on the open access-server arXiv.org we find a higher predictability than previous studies.
We here describe and present results of a simple neural network that predicts individual researchers' future citation counts based on a variety of data from the researchers' past.
http://arxiv.org/abs/1806.04641v2
http://arxiv.org/pdf/1806.04641v2.pdf
null
[ "Tobias Mistele", "Tom Price", "Sabine Hossenfelder" ]
[]
2018-06-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/an-ensemble-of-transfer-semi-supervised-and
1806.06506
null
null
An Ensemble of Transfer, Semi-supervised and Supervised Learning Methods for Pathological Heart Sound Classification
In this work, we propose an ensemble of classifiers to distinguish between various degrees of abnormalities of the heart using Phonocardiogram (PCG) signals acquired using digital stethoscopes in a clinical setting, for the INTERSPEECH 2018 Computational Paralinguistics (ComParE) Heart Beats SubChallenge. Our primary classification framework constitutes a convolutional neural network with 1D-CNN time-convolution (tConv) layers, which uses features transferred from a model trained on the 2016 Physionet Heart Sound Database. We also employ a Representation Learning (RL) approach to generate features in an unsupervised manner using Deep Recurrent Autoencoders and use Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) classifiers. Finally, we utilize an SVM classifier on a high-dimensional segment-level feature extracted using various functionals on short-term acoustic features, i.e., Low-Level Descriptors (LLD). An ensemble of the three different approaches provides a relative improvement of 11.13% compared to our best single sub-system in terms of the Unweighted Average Recall (UAR) performance metric on the evaluation dataset.
In this work, we propose an ensemble of classifiers to distinguish between various degrees of abnormalities of the heart using Phonocardiogram (PCG) signals acquired using digital stethoscopes in a clinical setting, for the INTERSPEECH 2018 Computational Paralinguistics (ComParE) Heart Beats SubChallenge.
http://arxiv.org/abs/1806.06506v2
http://arxiv.org/pdf/1806.06506v2.pdf
null
[ "Ahmed Imtiaz Humayun", "Md. Tauhiduzzaman Khan", "Shabnam Ghaffarzadegan", "Zhe Feng", "Taufiq Hasan" ]
[ "General Classification", "Representation Learning", "Sound Classification" ]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes...
https://paperswithcode.com/paper/a-unified-strategy-for-implementing-curiosity
1806.06505
null
null
A unified strategy for implementing curiosity and empowerment driven reinforcement learning
Although there are many approaches to implement intrinsically motivated artificial agents, the combined usage of multiple intrinsic drives remains a relatively unexplored research area. Specifically, we hypothesize that a mechanism capable of quantifying and controlling the evolution of the information flow between the agent and the environment could be the fundamental component for implementing a higher degree of autonomy into artificial intelligent agents. This paper proposes a unified strategy for implementing two semantically orthogonal intrinsic motivations: curiosity and empowerment. Curiosity reward informs the agent about the relevance of a recent agent action, whereas empowerment is implemented as the opposite information flow from the agent to the environment that quantifies the agent's potential of controlling its own future. We show that an additional homeostatic drive is derived from the curiosity reward, which generalizes and enhances the information gain of a classical curious/heterostatic reinforcement learning agent. We show how a shared internal model by curiosity and empowerment facilitates a more efficient training of the empowerment function. Finally, we discuss future directions for further leveraging the interplay between these two intrinsic rewards.
null
http://arxiv.org/abs/1806.06505v1
http://arxiv.org/pdf/1806.06505v1.pdf
null
[ "Ildefons Magrans de Abril", "Ryota Kanai" ]
[ "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multi-modal-data-augmentation-for-end-to-end
1803.10299
null
null
Multi-Modal Data Augmentation for End-to-End ASR
We present a new end-to-end architecture for automatic speech recognition (ASR) that can be trained using \emph{symbolic} input in addition to the traditional acoustic input. This architecture utilizes two separate encoders: one for acoustic input and another for symbolic input, both sharing the attention and decoder parameters. We call this architecture a multi-modal data augmentation network (MMDA), as it can support multi-modal (acoustic and symbolic) input and enables seamless mixing of large text datasets with significantly smaller transcribed speech corpora during training. We study different ways of transforming large text corpora into a symbolic form suitable for training our MMDA network. Our best MMDA setup obtains small improvements on character error rate (CER), and as much as 7-10\% relative word error rate (WER) improvement over a baseline both with and without an external language model.
null
http://arxiv.org/abs/1803.10299v3
http://arxiv.org/pdf/1803.10299v3.pdf
null
[ "Adithya Renduchintala", "Shuoyang Ding", "Matthew Wiesner", "Shinji Watanabe" ]
[ "Automatic Speech Recognition", "Automatic Speech Recognition (ASR)", "Data Augmentation", "Decoder", "Language Modeling", "Language Modelling", "speech-recognition", "Speech Recognition" ]
2018-03-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deforming-autoencoders-unsupervised
1806.06503
null
null
Deforming Autoencoders: Unsupervised Disentangling of Shape and Appearance
In this work we introduce Deforming Autoencoders, a generative model for images that disentangles shape from appearance in an unsupervised manner. As in the deformable template paradigm, shape is represented as a deformation between a canonical coordinate system (`template') and an observed image, while appearance is modeled in `canonical', template, coordinates, thus discarding variability due to deformations. We introduce novel techniques that allow this approach to be deployed in the setting of autoencoders and show that this method can be used for unsupervised group-wise image alignment. We show experiments with expression morphing in humans, hands, and digits, face manipulation, such as shape and appearance interpolation, as well as unsupervised landmark localization. A more powerful form of unsupervised disentangling becomes possible in template coordinates, allowing us to successfully decompose face images into shading and albedo, and further manipulate face images.
In this work we introduce Deforming Autoencoders, a generative model for images that disentangles shape from appearance in an unsupervised manner.
http://arxiv.org/abs/1806.06503v1
http://arxiv.org/pdf/1806.06503v1.pdf
ECCV 2018 9
[ "Zhixin Shu", "Mihir Sahasrabudhe", "Alp Guler", "Dimitris Samaras", "Nikos Paragios", "Iasonas Kokkinos" ]
[ "Unsupervised Facial Landmark Detection" ]
2018-06-18T00:00:00
http://openaccess.thecvf.com/content_ECCV_2018/html/Zhixin_Shu_Deforming_Autoencoders_Unsupervised_ECCV_2018_paper.html
http://openaccess.thecvf.com/content_ECCV_2018/papers/Zhixin_Shu_Deforming_Autoencoders_Unsupervised_ECCV_2018_paper.pdf
deforming-autoencoders-unsupervised-1
null
[]
https://paperswithcode.com/paper/conditional-affordance-learning-for-driving
1806.06498
null
null
Conditional Affordance Learning for Driving in Urban Environments
Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, that build an extensive model of the environment, and imitation learning approaches, that map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations, lacking the ability to navigate intersections, stop at traffic lights or respect speed limits. In this work, we propose a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68% in goal-directed navigation on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights and speed signs by using image-level labels only, as well as smooth car-following, resulting in a significant reduction of traffic accidents in simulation.
Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, that build an extensive model of the environment, and imitation learning approaches, that map images directly to control outputs.
http://arxiv.org/abs/1806.06498v3
http://arxiv.org/pdf/1806.06498v3.pdf
null
[ "Axel Sauer", "Nikolay Savinov", "Andreas Geiger" ]
[ "Autonomous Driving", "Autonomous Navigation", "Imitation Learning", "Navigate" ]
2018-06-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/ikostrikov/pytorch-a3c/blob/48d95844755e2c3e2c7e48bbd1a7141f7212b63f/train.py#L100", "description": "**Entropy Regularization** is a type of regularization used in [reinforcement learning](https://paperswithcode.com/methods/area/reinforcement-learning). For on-polic...
https://paperswithcode.com/paper/detecting-zero-day-controller-hijacking
1806.06496
null
null
Power-Grid Controller Anomaly Detection with Enhanced Temporal Deep Learning
Controllers of security-critical cyber-physical systems, like the power grid, are a very important class of computer systems. Attacks against the control code of a power-grid system, especially zero-day attacks, can be catastrophic. Earlier detection of the anomalies can prevent further damage. However, detecting zero-day attacks is extremely challenging because they have no known code and have unknown behavior. Furthermore, if data collected from the controller is transferred to a server through networks for analysis and detection of anomalous behavior, this creates a very large attack surface and also delays detection. In order to address this problem, we propose Reconstruction Error Distribution (RED) of Hardware Performance Counters (HPCs), and a data-driven defense system based on it. Specifically, we first train a temporal deep learning model, using only normal HPC readings from legitimate processes that run daily in these power-grid systems, to model the normal behavior of the power-grid controller. Then, we run this model using real-time data from commonly available HPCs. We use the proposed RED to enhance the temporal deep learning detection of anomalous behavior, by estimating distribution deviations from the normal behavior with an effective statistical test. Experimental results on a real power-grid controller show that we can detect anomalous behavior with high accuracy (>99.9%), nearly zero false positives and short (<360ms) latency.
null
https://arxiv.org/abs/1806.06496v3
https://arxiv.org/pdf/1806.06496v3.pdf
null
[ "Zecheng He", "Aswin Raghavan", "Guangyuan Hu", "Sek Chai", "Ruby Lee" ]
[ "Anomaly Detection", "Deep Learning" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/women-also-snowboard-overcoming-bias-in-1
1803.09797
null
null
Women also Snowboard: Overcoming Bias in Captioning Models
Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person's appearance or the image context. We introduce a new Equalizer model that ensures equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make gender-specific predictions. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. We also show that unlike other approaches, our model is indeed more often looking at people when predicting their gender.
We introduce a new Equalizer model that ensures equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present.
http://arxiv.org/abs/1803.09797v4
http://arxiv.org/pdf/1803.09797v4.pdf
ECCV 2018 9
[ "Kaylee Burns", "Lisa Anne Hendricks", "Kate Saenko", "Trevor Darrell", "Anna Rohrbach" ]
[ "Image Captioning" ]
2018-03-26T00:00:00
http://openaccess.thecvf.com/content_ECCV_2018/html/Lisa_Anne_Hendricks_Women_also_Snowboard_ECCV_2018_paper.html
http://openaccess.thecvf.com/content_ECCV_2018/papers/Lisa_Anne_Hendricks_Women_also_Snowboard_ECCV_2018_paper.pdf
women-also-snowboard-overcoming-bias-in-2
null
[]
https://paperswithcode.com/paper/boosted-density-estimation-remastered
1803.08178
null
null
Boosted Density Estimation Remastered
There has recently been a steady increase in the number of iterative approaches to density estimation. However, an accompanying burst of formal convergence guarantees has not followed; all results pay the price of heavy assumptions which are often unrealistic or hard to check. The Generative Adversarial Network (GAN) literature --- seemingly orthogonal to the aforementioned pursuit --- has had the side effect of a renewed interest in variational divergence minimisation (notably $f$-GAN). We show that by introducing a weak learning assumption (in the sense of the classical boosting framework) we are able to import some recent results from the GAN literature to develop an iterative boosted density estimation algorithm, including formal convergence results with rates, that does not suffer the shortcomings of other approaches. We show that the density fit is an exponential family, and as part of our analysis obtain an improved variational characterisation of $f$-GAN.
null
http://arxiv.org/abs/1803.08178v3
http://arxiv.org/pdf/1803.08178v3.pdf
null
[ "Zac Cranko", "Richard Nock" ]
[ "Density Estimation", "Generative Adversarial Network" ]
2018-03-22T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a...
https://paperswithcode.com/paper/disturbance-grassmann-kernels-for-subspace
1802.03517
null
null
Disturbance Grassmann Kernels for Subspace-Based Learning
In this paper, we focus on subspace-based learning problems, where data elements are linear subspaces instead of vectors. To handle this kind of data, Grassmann kernels were proposed to measure the space structure and used with classifiers, e.g., Support Vector Machines (SVMs). However, the existing discriminative algorithms mostly ignore the instability of subspaces, which would cause the classifiers to be misled by disturbed instances. Thus we propose considering all potential disturbance of subspaces in learning processes to obtain more robust classifiers. Firstly, we derive the dual optimization of linear classifiers with disturbance subject to a known distribution, resulting in a new kernel, Disturbance Grassmann (DG) kernel. Secondly, we research into two kinds of disturbance, relevant to the subspace matrix and singular values of bases, with which we extend the Projection kernel on Grassmann manifolds to two new kernels. Experiments on action data indicate that the proposed kernels perform better compared to state-of-the-art subspace-based methods, even in a worse environment.
null
http://arxiv.org/abs/1802.03517v2
http://arxiv.org/pdf/1802.03517v2.pdf
null
[ "Junyuan Hong", "Huanhuan Chen", "Feng Lin" ]
[]
2018-02-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/entity-aware-language-model-as-an
1803.04291
null
null
Entity-Aware Language Model as an Unsupervised Reranker
In language modeling, it is difficult to incorporate entity relationships from a knowledge-base. One solution is to use a reranker trained with global features, in which global features are derived from n-best lists. However, training such a reranker requires manually annotated n-best lists, which is expensive to obtain. We propose a method based on the contrastive estimation method that alleviates the need for such data. Experiments in the music domain demonstrate that global features, as well as features extracted from an external knowledge-base, can be incorporated into our reranker. Our final model, a simple ensemble of a language model and reranker, achieves a 0.44\% absolute word error rate improvement over an LSTM language model on the blind test data.
null
http://arxiv.org/abs/1803.04291v2
http://arxiv.org/pdf/1803.04291v2.pdf
null
[ "Mohammad Sadegh Rasooli", "Sarangarajan Parthasarathy" ]
[ "Language Modeling", "Language Modelling" ]
2018-03-12T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277", "description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\...
https://paperswithcode.com/paper/co-training-embeddings-of-knowledge-graphs
1806.06478
null
null
Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment
Multilingual knowledge graph (KG) embeddings provide latent semantic representations of entities and structured knowledge with cross-lingual inferences, which benefit various knowledge-driven cross-lingual NLP tasks. However, precisely learning such cross-lingual inferences is usually hindered by the low coverage of entity alignment in many KGs. Since many multilingual KGs also provide literal descriptions of entities, in this paper, we introduce an embedding-based approach which leverages a weakly aligned multilingual KG for semi-supervised cross-lingual learning using entity descriptions. Our approach performs co-training of two embedding models, i.e. a multilingual KG embedding model and a multilingual literal description embedding model. The models are trained on a large Wikipedia-based trilingual dataset where most entity alignment is unknown to training. Experimental results show that the performance of the proposed approach on the entity alignment task improves at each iteration of co-training, and eventually reaches a stage at which it significantly surpasses previous approaches. We also show that our approach has promising abilities for zero-shot entity alignment, and cross-lingual KG completion.
null
http://arxiv.org/abs/1806.06478v1
http://arxiv.org/pdf/1806.06478v1.pdf
null
[ "Muhao Chen", "Yingtao Tian", "Kai-Wei Chang", "Steven Skiena", "Carlo Zaniolo" ]
[ "Entity Alignment", "Knowledge Graphs" ]
2018-06-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/video-salient-object-detection-using
1708.01447
null
null
Video Salient Object Detection Using Spatiotemporal Deep Features
This paper presents a method for detecting salient objects in videos where temporal information in addition to spatial information is fully taken into account. Following recent reports on the advantage of deep features over conventional hand-crafted features, we propose a new set of SpatioTemporal Deep (STD) features that utilize local and global contexts over frames. We also propose a new SpatioTemporal Conditional Random Field (STCRF) to compute saliency from STD features. STCRF is our extension of CRF to the temporal domain and describes the relationships among neighboring regions both in a frame and over frames. STCRF leads to temporally consistent saliency maps over frames, contributing to the accurate detection of salient objects' boundaries and noise reduction during detection. Our proposed method first segments an input video into multiple scales and then computes a saliency map at each scale level using STD features with STCRF. The final saliency map is computed by fusing saliency maps at different scale levels. Our experiments, using publicly available benchmark datasets, confirm that the proposed method significantly outperforms state-of-the-art methods. We also applied our saliency computation to the video object segmentation task, showing that our method outperforms existing video object segmentation methods.
null
http://arxiv.org/abs/1708.01447v3
http://arxiv.org/pdf/1708.01447v3.pdf
null
[ "Trung-Nghia Le", "Akihiro Sugimoto" ]
[ "Object", "object-detection", "Object Detection", "RGB Salient Object Detection", "Salient Object Detection", "Semantic Segmentation", "Video Object Segmentation", "Video Salient Object Detection", "Video Semantic Segmentation" ]
2017-08-04T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Gr...
https://paperswithcode.com/paper/reinforcement-learning-in-rich-observation
1611.03907
null
null
Reinforcement Learning in Rich-Observation MDPs using Spectral Methods
Reinforcement learning (RL) in Markov decision processes (MDPs) with large state spaces is a challenging problem. The performance of standard RL algorithms degrades drastically with the dimensionality of state space. However, in practice, these large MDPs typically incorporate a latent or hidden low-dimensional structure. In this paper, we study the setting of rich-observation Markov decision processes (ROMDP), where there are a small number of hidden states which possess an injective mapping to the observation states. In other words, every observation state is generated through a single hidden state, and this mapping is unknown a priori. We introduce a spectral decomposition method that consistently learns this mapping, and more importantly, achieves it with low regret. The estimated mapping is integrated into an optimistic RL algorithm (UCRL), which operates on the estimated hidden space. We derive finite-time regret bounds for our algorithm with a weak dependence on the dimensionality of the observed space. In fact, our algorithm asymptotically achieves the same average regret as the oracle UCRL algorithm, which has the knowledge of the mapping from hidden to observed spaces. Thus, we derive an efficient spectral RL algorithm for ROMDPs.
null
http://arxiv.org/abs/1611.03907v4
http://arxiv.org/pdf/1611.03907v4.pdf
null
[ "Kamyar Azizzadenesheli", "Alessandro Lazaric", "Animashree Anandkumar" ]
[ "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2016-11-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/breaking-transferability-of-adversarial
1805.04613
null
null
Breaking Transferability of Adversarial Samples with Randomness
We investigate the role of transferability of adversarial attacks in the observed vulnerabilities of Deep Neural Networks (DNNs). We demonstrate that introducing randomness to the DNN models is sufficient to defeat adversarial attacks, given that the adversary does not have an unlimited attack budget. Instead of making one specific DNN model robust to perfect knowledge attacks (a.k.a, white box attacks), creating randomness within an army of DNNs completely eliminates the possibility of perfect knowledge acquisition, resulting in a significantly more robust DNN ensemble against the strongest form of attacks. We also show that when the adversary has an unlimited budget of data perturbation, all defensive techniques would eventually break down as the budget increases. Therefore, it is important to understand the game saddle point where the adversary would not further pursue this endeavor. Furthermore, we explore the relationship between attack severity and decision boundary robustness in the version space. We empirically demonstrate that by simply adding a small Gaussian random noise to the learned weights, a DNN model can increase its resilience to adversarial attacks by as much as 74.2%. More importantly, we show that by randomly activating/revealing a model from a pool of pre-trained DNNs at each query request, we can put a tremendous strain on the adversary's attack strategies. We compare our randomization techniques to the Ensemble Adversarial Training technique and show that our randomization techniques are superior under different attack budget constraints.
null
http://arxiv.org/abs/1805.04613v2
http://arxiv.org/pdf/1805.04613v2.pdf
null
[ "Yan Zhou", "Murat Kantarcioglu", "Bowei Xi" ]
[]
2018-05-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-rbo-dataset-of-articulated-objects-and
1806.06465
null
null
The RBO Dataset of Articulated Objects and Interactions
We present a dataset with models of 14 articulated objects commonly found in human environments and with RGB-D video sequences and wrenches recorded of human interactions with them. The 358 interaction sequences total 67 minutes of human manipulation under varying experimental conditions (type of interaction, lighting, perspective, and background). Each interaction with an object is annotated with the ground truth poses of its rigid parts and the kinematic state obtained by a motion capture system. For a subset of 78 sequences (25 minutes), we also measured the interaction wrenches. The object models contain textured three-dimensional triangle meshes of each link and their motion constraints. We provide Python scripts to download and visualize the data. The data is available at https://tu-rbo.github.io/articulated-objects/ and hosted at https://zenodo.org/record/1036660/.
null
http://arxiv.org/abs/1806.06465v1
http://arxiv.org/pdf/1806.06465v1.pdf
null
[ "Roberto Martín-Martín", "Clemens Eppner", "Oliver Brock" ]
[]
2018-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-policy-representations-in-multiagent
1806.06464
null
null
Learning Policy Representations in Multiagent Systems
Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by hand-engineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a handful of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging high-dimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning.
null
http://arxiv.org/abs/1806.06464v2
http://arxiv.org/pdf/1806.06464v2.pdf
ICML 2018 7
[ "Aditya Grover", "Maruan Al-Shedivat", "Jayesh K. Gupta", "Yura Burda", "Harrison Edwards" ]
[ "Clustering", "continuous-control", "Continuous Control", "Deep Reinforcement Learning", "Imitation Learning", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)", "Representation Learning" ]
2018-06-17T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2435
http://proceedings.mlr.press/v80/grover18a/grover18a.pdf
learning-policy-representations-in-multiagent-1
null
[]
https://paperswithcode.com/paper/sub-gaussian-estimators-of-the-mean-of-a-1
1605.07129
null
null
Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries
Estimation of the covariance matrix has attracted a lot of attention of the statistical research community over the years, partially due to important applications such as Principal Component Analysis. However, frequently used empirical covariance estimator (and its modifications) is very sensitive to outliers in the data. As P. J. Huber wrote in 1964, "...This raises a question which could have been asked already by Gauss, but which was, as far as I know, only raised a few years ago (notably by Tukey): what happens if the true distribution deviates slightly from the assumed normal one? As is now well known, the sample mean then may have a catastrophically bad performance..." Motivated by this question, we develop a new estimator of the (element-wise) mean of a random matrix, which includes covariance estimation problem as a special case. Assuming that the entries of a matrix possess only finite second moment, this new estimator admits sub-Gaussian or sub-exponential concentration around the unknown mean in the operator norm. We will explain the key ideas behind our construction, as well as applications to covariance estimation and matrix completion problems.
null
http://arxiv.org/abs/1605.07129v5
http://arxiv.org/pdf/1605.07129v5.pdf
null
[ "Stanislav Minsker" ]
[ "Matrix Completion" ]
2016-05-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fast-convex-pruning-of-deep-neural-networks
1806.06457
null
null
Fast Convex Pruning of Deep Neural Networks
We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network. The method is a convex post-processing module, which prunes (sparsifies) a trained network layer by layer, while preserving the internal responses. We present a comprehensive analysis of Net-Trim from both the algorithmic and sample complexity standpoints, centered on a fast, scalable convex optimization program. Our analysis includes consistency results between the initial and retrained models before and after Net-Trim application and guarantees on the number of training samples needed to discover a network that can be expressed using a certain number of nonzero terms. Specifically, if there is a set of weights that uses at most $s$ terms that can re-create the layer outputs from the layer inputs, we can find these weights from $\mathcal{O}(s\log N/s)$ samples, where $N$ is the input size. These theoretical results are similar to those for sparse regression using the Lasso, and our analysis uses some of the same recently-developed tools (namely recent results on the concentration of measure and convex analysis). Finally, we propose an algorithmic framework based on the alternating direction method of multipliers (ADMM), which allows a fast and simple implementation of Net-Trim for network pruning and compression.
We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network.
http://arxiv.org/abs/1806.06457v2
http://arxiv.org/pdf/1806.06457v2.pdf
null
[ "Alireza Aghasi", "Afshin Abdi", "Justin Romberg" ]
[ "Network Pruning" ]
2018-06-17T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Pruning", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Model Compression", "parent": null }, "name": "Pruning", "source_title": "Pruning Filters for Ef...
https://paperswithcode.com/paper/cross-modality-image-synthesis-from-unpaired
1803.06629
null
null
Cross-modality image synthesis from unpaired data using CycleGAN: Effects of gradient consistency loss and training data size
CT is commonly used in orthopedic procedures. MRI is used along with CT to identify muscle structures and diagnose osteonecrosis due to its superior soft tissue contrast. However, MRI has poor contrast for bone structures. Clearly, it would be helpful if a corresponding CT were available, as bone boundaries are more clearly seen and CT has standardized (i.e., Hounsfield) units. Therefore, we aim at MR-to-CT synthesis. Although the CycleGAN was successfully applied to unpaired CT and MR images of the head, these images do not have as much variation of intensity pairs as do images in the pelvic region due to the presence of joints and muscles. In this paper, we extended the CycleGAN approach by adding the gradient consistency loss to improve the accuracy at the boundaries. We conducted two experiments. To evaluate image synthesis, we investigated dependency of image synthesis accuracy on 1) the number of training data and 2) the gradient consistency loss. To demonstrate the applicability of our method, we also investigated a segmentation accuracy on synthesized images.
null
http://arxiv.org/abs/1803.06629v3
http://arxiv.org/pdf/1803.06629v3.pdf
null
[ "Yuta Hiasa", "Yoshito Otake", "Masaki Takao", "Takumi Matsuoka", "Kazuma Takashima", "Jerry L. Prince", "Nobuhiko Sugano", "Yoshinobu Sato" ]
[ "Image Generation" ]
2018-03-18T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116", "description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a ...
https://paperswithcode.com/paper/self-attentive-neural-collaborative-filtering
1806.06446
null
null
Self-Attentive Neural Collaborative Filtering
This paper has been withdrawn as we discovered a bug in our tensorflow implementation that involved accidental mixing of vectors across batches. This led to different inference results given different batch sizes, which is completely strange. The performance scores still remain the same, but we concluded that it was not the self-attention that contributed to the performance. We are withdrawing the paper because this renders the main claim of the paper false. Thanks to Guan Xinyu from NUS for discovering this issue in our previously open source code.
null
http://arxiv.org/abs/1806.06446v2
http://arxiv.org/pdf/1806.06446v2.pdf
null
[ "Yi Tay", "Shuai Zhang", "Luu Anh Tuan", "Siu Cheung Hui" ]
[ "Collaborative Filtering" ]
2018-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ncrf-an-open-source-neural-sequence-labeling
1806.05626
null
null
NCRF++: An Open-source Neural Sequence Labeling Toolkit
This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It provides users with an interface for building a custom model structure through a configuration file with flexible neural feature design and utilization. Built on PyTorch, the core operations are calculated in batch, making the toolkit efficient with the acceleration of GPU. It also includes the implementations of most state-of-the-art neural sequence labeling models such as LSTM-CRF, facilitating reproducing and refinement on those methods.
This paper describes NCRF++, a toolkit for neural sequence labeling.
http://arxiv.org/abs/1806.05626v2
http://arxiv.org/pdf/1806.05626v2.pdf
ACL 2018 7
[ "Jie Yang", "Yue Zhang" ]
[ "Chunking", "GPU", "Named Entity Recognition (NER)", "Part-Of-Speech Tagging" ]
2018-06-14T00:00:00
https://aclanthology.org/P18-4013
https://aclanthology.org/P18-4013.pdf
ncrf-an-open-source-neural-sequence-labeling-1
null
[ { "code_snippet_url": null, "description": "**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Gr...
https://paperswithcode.com/paper/predicting-switching-graph-labelings-with
1806.06439
null
null
Online Prediction of Switching Graph Labelings with Cluster Specialists
We address the problem of predicting the labeling of a graph in an online setting when the labeling is changing over time. We present an algorithm based on a specialist approach; we develop the machinery of cluster specialists which probabilistically exploits the cluster structure in the graph. Our algorithm has two variants, one of which surprisingly only requires $\mathcal{O}(\log n)$ time on any trial $t$ on an $n$-vertex graph, an exponential speed up over existing methods. We prove switching mistake-bound guarantees for both variants of our algorithm. Furthermore these mistake bounds smoothly vary with the magnitude of the change between successive labelings. We perform experiments on Chicago Divvy Bicycle Sharing data and show that our algorithms significantly outperform an existing algorithm (a kernelized Perceptron) as well as several natural benchmarks.
We address the problem of predicting the labeling of a graph in an online setting when the labeling is changing over time.
https://arxiv.org/abs/1806.06439v3
https://arxiv.org/pdf/1806.06439v3.pdf
NeurIPS 2019 12
[ "Mark Herbster", "James Robinson" ]
[]
2018-06-17T00:00:00
http://papers.nips.cc/paper/8923-online-prediction-of-switching-graph-labelings-with-cluster-specialists
http://papers.nips.cc/paper/8923-online-prediction-of-switching-graph-labelings-with-cluster-specialists.pdf
online-prediction-of-switching-graph
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key com...
https://paperswithcode.com/paper/compressed-sensing-with-deep-image-prior-and
1806.06438
null
Hkl_sAVtwr
Compressed Sensing with Deep Image Prior and Learned Regularization
We propose a novel method for compressed sensing recovery using untrained deep generative models. Our method is based on the recently proposed Deep Image Prior (DIP), wherein the convolutional weights of the network are optimized to match the observed measurements. We show that this approach can be applied to solve any differentiable linear inverse problem, outperforming previous unlearned methods. Unlike various learned approaches based on generative models, our method does not require pre-training over large datasets. We further introduce a novel learned regularization technique, which incorporates prior information on the network weights. This reduces reconstruction error, especially for noisy measurements. Finally, we prove that, using the DIP optimization approach, moderately overparameterized single-layer networks can perfectly fit any signal despite the non-convex nature of the fitting problem. This theoretical result provides justification for early stopping.
We propose a novel method for compressed sensing recovery using untrained deep generative models.
https://arxiv.org/abs/1806.06438v4
https://arxiv.org/pdf/1806.06438v4.pdf
null
[ "Dave Van Veen", "Ajil Jalal", "Mahdi Soltanolkotabi", "Eric Price", "Sriram Vishwanath", "Alexandros G. Dimakis" ]
[ "compressed sensing" ]
2018-06-17T00:00:00
https://openreview.net/forum?id=Hkl_sAVtwr
https://openreview.net/pdf?id=Hkl_sAVtwr
null
null
[]
https://paperswithcode.com/paper/subspace-embedding-and-linear-regression-with
1806.06430
null
null
Subspace Embedding and Linear Regression with Orlicz Norm
We consider a generalization of the classic linear regression problem to the case when the loss is an Orlicz norm. An Orlicz norm is parameterized by a non-negative convex function $G:\mathbb{R}_+\rightarrow\mathbb{R}_+$ with $G(0)=0$: the Orlicz norm of a vector $x\in\mathbb{R}^n$ is defined as $ \|x\|_G=\inf\left\{\alpha>0 \mid \sum_{i=1}^n G(|x_i|/\alpha)\leq 1\right\}. $ We consider the cases where the function $G(\cdot)$ grows subquadratically. Our main result is based on a new oblivious embedding which embeds the column space of a given matrix $A\in\mathbb{R}^{n\times d}$ with Orlicz norm into a lower dimensional space with $\ell_2$ norm. Specifically, we show how to efficiently find an embedding matrix $S\in\mathbb{R}^{m\times n},m<n$ such that $\forall x\in\mathbb{R}^{d},\Omega(1/(d\log n)) \cdot \|Ax\|_G\leq \|SAx\|_2\leq O(d^2\log n) \cdot \|Ax\|_G.$ By applying this subspace embedding technique, we show an approximation algorithm for the regression problem $\min_{x\in\mathbb{R}^d} \|Ax-b\|_G$, up to a $O(d\log^2 n)$ factor. As a further application of our techniques, we show how to also use them to improve on the algorithm for the $\ell_p$ low rank matrix approximation problem for $1\leq p<2$.
null
http://arxiv.org/abs/1806.06430v1
http://arxiv.org/pdf/1806.06430v1.pdf
ICML 2018 7
[ "Alexandr Andoni", "Chengyu Lin", "Ying Sheng", "Peilin Zhong", "Ruiqi Zhong" ]
[ "regression" ]
2018-06-17T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2451
http://proceedings.mlr.press/v80/andoni18a/andoni18a.pdf
subspace-embedding-and-linear-regression-with-1
null
[ { "code_snippet_url": null, "description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predict...
https://paperswithcode.com/paper/scalable-methods-for-8-bit-training-of-neural
1805.11046
null
null
Scalable Methods for 8-bit Training of Neural Networks
Quantized Neural Networks (QNNs) are often used to improve network efficiency during the inference phase, i.e. after the network has been trained. Extensive research in the field suggests many different quantization schemes. Still, the number of bits required, as well as the best quantization scheme, are yet unknown. Our theoretical analysis suggests that most of the training process is robust to substantial precision reduction, and points to only a few specific operations that require higher precision. Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients. Additionally, as QNNs require batch-normalization to be trained at high precision, we introduce Range Batch-Normalization (BN) which has significantly higher tolerance to quantization noise and improved computational complexity. Our simulations show that Range BN is equivalent to the traditional batch norm if a precise scale adjustment, which can be approximated analytically, is applied. To the best of the authors' knowledge, this work is the first to quantize the weights, activations, as well as a substantial volume of the gradients stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset.
Armed with this knowledge, we quantize the model parameters, activations and layer gradients to 8-bit, leaving at a higher precision only the final step in the computation of the weight gradients.
http://arxiv.org/abs/1805.11046v3
http://arxiv.org/pdf/1805.11046v3.pdf
NeurIPS 2018 12
[ "Ron Banner", "Itay Hubara", "Elad Hoffer", "Daniel Soudry" ]
[ "Quantization" ]
2018-05-25T00:00:00
http://papers.nips.cc/paper/7761-scalable-methods-for-8-bit-training-of-neural-networks
http://papers.nips.cc/paper/7761-scalable-methods-for-8-bit-training-of-neural-networks.pdf
scalable-methods-for-8-bit-training-of-neural-1
null
[]
https://paperswithcode.com/paper/a-novel-hybrid-machine-learning-model-for
1806.06423
null
null
A Novel Hybrid Machine Learning Model for Auto-Classification of Retinal Diseases
Automatic clinical diagnosis of retinal diseases has emerged as a promising approach to facilitate discovery in areas with limited access to specialists. We propose a novel visual-assisted diagnosis hybrid model based on the support vector machine (SVM) and deep neural networks (DNNs). The model incorporates complementary strengths of DNNs and SVM. Furthermore, we present a new clinical retina label collection for ophthalmology incorporating 32 retina disease classes. Using EyeNet, our model achieves 89.73% diagnosis accuracy, and the model performance is comparable to that of professional ophthalmologists.
Automatic clinical diagnosis of retinal diseases has emerged as a promising approach to facilitate discovery in areas with limited access to specialists.
http://arxiv.org/abs/1806.06423v1
http://arxiv.org/pdf/1806.06423v1.pdf
null
[ "C. -H. Huck Yang", "Jia-Hong Huang", "Fangyu Liu", "Fang-Yi Chiu", "Mengya Gao", "Weifeng Lyu", "I-Hung Lin M. D.", "Jesper Tegner" ]
[ "BIG-bench Machine Learning", "General Classification", "Hybrid Machine Learning" ]
2018-06-17T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes...
https://paperswithcode.com/paper/learning-to-evaluate-image-captioning
1806.06422
null
null
Learning to Evaluate Image Captioning
Evaluation metrics for image captioning face two challenges. Firstly, commonly used metrics such as CIDEr, METEOR, ROUGE and BLEU often do not correlate well with human judgments. Secondly, each metric has well known blind spots to pathological caption constructions, and rule-based metrics lack provisions to repair such blind spots once identified. For example, the newly proposed SPICE correlates well with human judgments, but fails to capture the syntactic structure of a sentence. To address these two challenges, we propose a novel learning based discriminative evaluation metric that is directly trained to distinguish between human and machine-generated captions. In addition, we further propose a data augmentation scheme to explicitly incorporate pathological transformations as negative examples during training. The proposed metric is evaluated with three kinds of robustness tests and its correlation with human judgments. Extensive experiments show that the proposed data augmentation scheme not only makes our metric more robust toward several pathological transformations, but also improves its correlation with human judgments. Our metric outperforms other metrics on both caption level human correlation in Flickr 8k and system level human correlation in COCO. The proposed approach could be served as a learning based evaluation metric that is complementary to existing rule-based metrics.
To address these two challenges, we propose a novel learning based discriminative evaluation metric that is directly trained to distinguish between human and machine-generated captions.
http://arxiv.org/abs/1806.06422v1
http://arxiv.org/pdf/1806.06422v1.pdf
CVPR 2018 6
[ "Yin Cui", "Guandao Yang", "Andreas Veit", "Xun Huang", "Serge Belongie" ]
[ "8k", "Data Augmentation", "Image Captioning", "Sentence" ]
2018-06-17T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Cui_Learning_to_Evaluate_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Cui_Learning_to_Evaluate_CVPR_2018_paper.pdf
learning-to-evaluate-image-captioning-1
null
[]
https://paperswithcode.com/paper/high-speed-tracking-with-multi-kernel
1806.06418
null
null
High-speed Tracking with Multi-kernel Correlation Filters
Correlation filter (CF) based trackers are currently ranked top in terms of their performances. Nevertheless, only some of them, such as KCF~\cite{henriques15} and MKCF~\cite{tangm15}, are able to exploit the powerful discriminability of non-linear kernels. Although MKCF achieves more powerful discriminability than KCF through introducing multi-kernel learning (MKL) into KCF, its improvement over KCF is quite limited and its computational burden increases significantly in comparison with KCF. In this paper, we will introduce the MKL into KCF in a different way than MKCF. We reformulate the MKL version of CF objective function with its upper bound, alleviating the negative mutual interference of different kernels significantly. Our novel MKCF tracker, MKCFup, outperforms KCF and MKCF with large margins and can still work at very high fps. Extensive experiments on public datasets show that our method is superior to state-of-the-art algorithms for target objects of small move at very high speed.
In this paper, we will introduce the MKL into KCF in a different way than MKCF.
http://arxiv.org/abs/1806.06418v1
http://arxiv.org/pdf/1806.06418v1.pdf
CVPR 2018 6
[ "Ming Tang", "Bin Yu", "Fan Zhang", "Jinqiao Wang" ]
[ "Video Object Tracking", "Vocal Bursts Intensity Prediction" ]
2018-06-17T00:00:00
http://openaccess.thecvf.com/content_cvpr_2018/html/Tang_High-Speed_Tracking_With_CVPR_2018_paper.html
http://openaccess.thecvf.com/content_cvpr_2018/papers/Tang_High-Speed_Tracking_With_CVPR_2018_paper.pdf
high-speed-tracking-with-multi-kernel-1
null
[]
https://paperswithcode.com/paper/feature-learning-and-classification-in
1806.06415
null
null
Feature Learning and Classification in Neuroimaging: Predicting Cognitive Impairment from Magnetic Resonance Imaging
Due to the rapid innovation of technology and the desire to find and employ biomarkers for neurodegenerative disease, high-dimensional data classification problems are routinely encountered in neuroimaging studies. To avoid over-fitting and to explore relationships between disease and potential biomarkers, feature learning and selection plays an important role in classifier construction and is an important area in machine learning. In this article, we review several important feature learning and selection techniques including lasso-based methods, PCA, the two-sample t-test, and stacked auto-encoders. We compare these approaches using a numerical study involving the prediction of Alzheimer's disease from Magnetic Resonance Imaging.
null
http://arxiv.org/abs/1806.06415v1
http://arxiv.org/pdf/1806.06415v1.pdf
null
[ "Shan Shi", "Farouk Nathoo" ]
[ "BIG-bench Machine Learning", "General Classification" ]
2018-06-17T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance m...
https://paperswithcode.com/paper/one-to-one-mapping-between-stimulus-and
1805.09001
null
null
One-to-one Mapping between Stimulus and Neural State: Memory and Classification
Synaptic strength can be seen as the probability of propagating an impulse, and according to synaptic plasticity, a function could exist from propagation activity to synaptic strength. If the function satisfies constraints such as continuity and monotonicity, a neural network under external stimulus will always go to a fixed point, and there could be a one-to-one mapping between the external stimulus and the synaptic strengths at the fixed point. In other words, the neural network "memorizes" the external stimulus in its synapses. A biological classifier is proposed to utilize this mapping.
Synaptic strength can be seen as the probability of propagating an impulse, and according to synaptic plasticity, a function could exist from propagation activity to synaptic strength.
http://arxiv.org/abs/1805.09001v6
http://arxiv.org/pdf/1805.09001v6.pdf
null
[ "Sizhong Lan" ]
[ "General Classification" ]
2018-05-23T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/negative-learning-rates-and-p-learning
1603.08253
null
null
Negative Learning Rates and P-Learning
We present a method of training a differentiable function approximator for a regression task using negative examples. We effect this training using negative learning rates. We also show how this method can be used to perform direct policy learning in a reinforcement learning setting.
null
http://arxiv.org/abs/1603.08253v3
http://arxiv.org/pdf/1603.08253v3.pdf
null
[ "Devon Merrill" ]
[ "regression", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2016-03-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/measuring-semantic-coherence-of-a
1806.06411
null
null
Measuring Semantic Coherence of a Conversation
Conversational systems have become increasingly popular as a way for humans to interact with computers. To be able to provide intelligent responses, conversational systems must correctly model the structure and semantics of a conversation. We introduce the task of measuring semantic (in)coherence in a conversation with respect to background knowledge, which relies on the identification of semantic relations between concepts introduced during a conversation. We propose and evaluate graph-based and machine learning-based approaches for measuring semantic coherence using knowledge graphs, their vector space embeddings and word embedding models, as sources of background knowledge. We demonstrate how these approaches are able to uncover different coherence patterns in conversations on the Ubuntu Dialogue Corpus.
Conversational systems have become increasingly popular as a way for humans to interact with computers.
http://arxiv.org/abs/1806.06411v1
http://arxiv.org/pdf/1806.06411v1.pdf
null
[ "Svitlana Vakulenko", "Maarten de Rijke", "Michael Cochez", "Vadim Savenkov", "Axel Polleres" ]
[ "Knowledge Graphs" ]
2018-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/learning-a-prior-over-intent-via-meta-inverse
1805.12573
null
null
Learning a Prior over Intent via Meta-Inverse Reinforcement Learning
A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a "prior" that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.
null
https://arxiv.org/abs/1805.12573v5
https://arxiv.org/pdf/1805.12573v5.pdf
null
[ "Kelvin Xu", "Ellis Ratner", "Anca Dragan", "Sergey Levine", "Chelsea Finn" ]
[ "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2018-05-31T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/gated-path-planning-networks
1806.06408
null
null
Gated Path Planning Networks
Value Iteration Networks (VINs) are effective differentiable path planning modules that can be used by agents to perform navigation while still maintaining end-to-end differentiability of the entire architecture. Despite their effectiveness, they suffer from several disadvantages including training instability, random seed sensitivity, and other optimization problems. In this work, we reframe VINs as recurrent-convolutional networks which demonstrates that VINs couple recurrent convolutions with an unconventional max-pooling activation. From this perspective, we argue that standard gated recurrent update equations could potentially alleviate the optimization issues plaguing VIN. The resulting architecture, which we call the Gated Path Planning Network, is shown to empirically outperform VIN on a variety of metrics such as learning speed, hyperparameter sensitivity, iteration count, and even generalization. Furthermore, we show that this performance gap is consistent across different maze transition types, maze sizes and even show success on a challenging 3D environment, where the planner is only provided with first-person RGB images.
Value Iteration Networks (VINs) are effective differentiable path planning modules that can be used by agents to perform navigation while still maintaining end-to-end differentiability of the entire architecture.
http://arxiv.org/abs/1806.06408v1
http://arxiv.org/pdf/1806.06408v1.pdf
ICML 2018 7
[ "Lisa Lee", "Emilio Parisotto", "Devendra Singh Chaplot", "Eric Xing", "Ruslan Salakhutdinov" ]
[ "Sensitivity" ]
2018-06-17T00:00:00
https://icml.cc/Conferences/2018/Schedule?showEvent=2488
http://proceedings.mlr.press/v80/lee18c/lee18c.pdf
gated-path-planning-networks-1
null
[]
https://paperswithcode.com/paper/an-improved-text-sentiment-classification
1806.06407
null
null
An Improved Text Sentiment Classification Model Using TF-IDF and Next Word Negation
With the rapid growth of text sentiment analysis, the demand for automatic classification of electronic documents has increased by leaps and bounds. The paradigm of text classification or text mining has been the subject of many research works in recent times. In this paper we propose a technique for text sentiment classification using term frequency-inverse document frequency (TF-IDF) along with Next Word Negation (NWN). We have also compared the performances of the binary bag of words model, the TF-IDF model, and the TF-IDF with next word negation (TF-IDF-NWN) model for text classification. Our proposed model is then applied to three different text mining algorithms, and we found the Linear Support Vector Machine (LSVM) to be the most appropriate to work with our proposed model. The achieved results show a significant increase in accuracy compared to earlier methods.
null
http://arxiv.org/abs/1806.06407v1
http://arxiv.org/pdf/1806.06407v1.pdf
null
[ "Bijoyan Das", "Sarit Chakraborty" ]
[ "Classification", "General Classification", "Negation", "Sentiment Analysis", "Sentiment Classification", "text-classification", "Text Classification" ]
2018-06-17T00:00:00
null
null
null
null
[]