TuringKey (圖靈之心): Machine Learning Paper Digest, 20170324

TuringKey (圖靈之心) currently collects and organizes papers in five areas: machine learning, natural language processing, artificial intelligence, computer vision, and robotics. Listings are updated daily with papers from two days prior (T-2), and each entry includes the title, authors, subject areas, English abstract, and a machine-translated Chinese abstract. Follow TuringKey for timely paper updates.

Paper 1 : Rejection-free Ensemble MCMC with applications to Factorial Hidden Markov Models

Authors : Märtens Kaspar, Titsias Michalis K, Yau Christopher

Subject : Computation, Methodology, Machine Learning

Submitted Date : 20170324

Abstract : Bayesian inference for complex models is challenging due to the need to explore high-dimensional spaces and multimodality, and standard Monte Carlo samplers can have difficulties effectively exploring the posterior. We introduce a general purpose rejection-free ensemble Markov Chain Monte Carlo (MCMC) technique to improve on existing poorly mixing samplers. This is achieved by combining parallel tempering and an auxiliary variable move to exchange information between the chains. We demonstrate this ensemble MCMC scheme on Bayesian inference in Factorial Hidden Markov Models. This high-dimensional inference problem is difficult due to the exponentially sized latent variable space. Existing sampling approaches mix slowly and can get trapped in local modes. We show that the performance of these samplers is improved by our rejection-free ensemble technique and that the method is attractive and "easy-to-use" since no parameter tuning is required.

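The parallel-tempering exchange move underlying this scheme can be illustrated on a toy bimodal target. The sketch below is standard parallel tempering (random-walk Metropolis within each chain plus replica swaps between adjacent temperatures), not the paper's rejection-free ensemble move; the target density, temperature ladder, and step sizes are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Bimodal 1-D target: equal mixture of N(-3, 0.5^2) and N(3, 0.5^2), up to a constant."""
    return np.logaddexp(-0.5 * ((x + 3) / 0.5) ** 2, -0.5 * ((x - 3) / 0.5) ** 2)

betas = [1.0, 0.5, 0.1]                 # inverse temperatures; chain 0 targets the posterior
chains = [rng.normal() for _ in betas]  # one state per temperature
samples = []
for step in range(5000):
    # within-chain random-walk Metropolis at each temperature
    for k, beta in enumerate(betas):
        prop = chains[k] + rng.normal(scale=1.0)
        if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(chains[k])):
            chains[k] = prop
    # replica-exchange move: propose swapping two adjacent-temperature states
    k = rng.integers(len(betas) - 1)
    dlog = (betas[k] - betas[k + 1]) * (log_target(chains[k + 1]) - log_target(chains[k]))
    if np.log(rng.uniform()) < dlog:
        chains[k], chains[k + 1] = chains[k + 1], chains[k]
    samples.append(chains[0])
samples = np.array(samples[1000:])      # discard burn-in
```

The hot chain (beta = 0.1) sees a nearly flat barrier between the modes, and the swap moves feed its mode-hopping down to the cold chain, so the cold chain visits both modes even though its own random walk rarely crosses.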

Paper 2 : Asymmetric Learning Vector Quantization for Efficient Nearest Neighbor Classification in Dynamic Time Warping Spaces

Authors : Jain Brijnesh, Schultz David

Subject : Learning, Machine Learning

Submitted Date : 20170324

Abstract : The nearest neighbor method together with the dynamic time warping (DTW) distance is one of the most popular approaches in time series classification. This method suffers from high storage and computation requirements for large training sets. As a solution to both drawbacks, this article extends learning vector quantization (LVQ) from Euclidean spaces to DTW spaces. The proposed LVQ scheme uses asymmetric weighted averaging as update rule. Empirical results exhibited superior performance of asymmetric generalized LVQ (GLVQ) over other state-of-the-art prototype generation methods for nearest neighbor classification.

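The two ingredients the abstract combines — DTW alignment and prototype updates along that alignment — can be sketched as follows. This is a hedged toy version: a plain LVQ1-style move of each aligned prototype element toward the sample, not the paper's asymmetric weighted-averaging rule, and the learning rate `eta` is an illustrative choice.

```python
import numpy as np

def dtw(x, p):
    """DTW distance and optimal warping path between 1-D series x and prototype p."""
    n, m = len(x), len(p)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - p[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack the optimal warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

def lvq_update(x, p, eta=0.1, attract=True):
    """Move prototype p toward (or away from) sample x along the DTW alignment."""
    _, path = dtw(x, p)
    p = p.copy()
    sign = 1.0 if attract else -1.0
    for i, j in path:
        p[j] += sign * eta * (x[i] - p[j])
    return p
```

An attractive update (correctly classified sample) pulls the prototype closer in DTW distance; the repulsive case (`attract=False`) pushes it away, as in standard LVQ.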

Paper 3 : Smart Augmentation - Learning an Optimal Data Augmentation Strategy

Authors : Lemley Joseph, Bazrafkan Shabab, Corcoran Peter

Subject : Artificial Intelligence, Learning, Machine Learning

Submitted Date : 20170324

Abstract : A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks (DNN). There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method which we call Smart Augmentation and we show how to use it to increase the accuracy and reduce overfitting on a target network. Smart Augmentation works by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that network's loss. This allows us to learn augmentations that minimize the error of that network.

Smart Augmentation has shown the potential to increase accuracy by demonstrably significant measures on all datasets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.

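The idea of an augmenter trained jointly with the target network so as to reduce the target's loss can be illustrated with a deliberately tiny analogue: a logistic-regression "target network" and a single learned mixing weight `alpha` that blends same-class pairs, updated by a finite-difference gradient of the target loss. Everything here (the data, the one-parameter augmenter, the step sizes) is an illustrative toy, not the paper's DNN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy binary classification data: two Gaussian blobs
X0 = rng.normal(loc=-1.0, size=(50, 2))
X1 = rng.normal(loc=+1.0, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def loss(w, X, y):
    """Mean logistic loss of a linear classifier w (labels in {0, 1})."""
    z = X @ w
    return np.mean(np.log1p(np.exp(-(2 * y - 1) * z)))

def grad(w, X, y):
    z = X @ w
    s = -(2 * y - 1) / (1 + np.exp((2 * y - 1) * z))
    return X.T @ s / len(y)

alpha = 0.9        # the "augmenter": one learned mixing weight
w = np.zeros(2)    # the "target network": a linear classifier
for step in range(300):
    # augmenter: blend random same-class pairs into new samples
    idx_a = rng.integers(0, 50, size=20)
    idx_b = rng.integers(0, 50, size=20)
    cls = rng.integers(0, 2, size=20)
    Xa = np.where(cls[:, None] == 0, X0[idx_a], X1[idx_a])
    Xb = np.where(cls[:, None] == 0, X0[idx_b], X1[idx_b])
    X_aug = alpha * Xa + (1 - alpha) * Xb
    # target step on real + augmented data
    w -= 0.5 * grad(w, np.vstack([X, X_aug]), np.concatenate([y, cls]))
    # augmenter step: finite-difference gradient of the target loss w.r.t. alpha
    eps = 1e-3
    l_plus = loss(w, (alpha + eps) * Xa + (1 - alpha - eps) * Xb, cls)
    l_minus = loss(w, (alpha - eps) * Xa + (1 - alpha + eps) * Xb, cls)
    alpha = float(np.clip(alpha - 0.1 * (l_plus - l_minus) / (2 * eps), 0.0, 1.0))
```

The key structural point survives the simplification: the augmenter's parameter is updated by the gradient of the *target's* loss on the generated samples, so the augmentation is learned rather than hand-designed.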

Paper 4 : A Nonconvex Splitting Method for Symmetric Nonnegative Matrix Factorization: Convergence Analysis and Optimality

Authors : Lu Songtao, Hong Mingyi, Wang Zhengdao

Subject : Optimization and Control, Machine Learning

Submitted Date : 20170324

Abstract : Symmetric nonnegative matrix factorization (SymNMF) has important applications in data analytics problems such as document clustering, community detection and image segmentation. In this paper, we propose a novel nonconvex variable splitting method for solving SymNMF. The proposed algorithm is guaranteed to converge to the set of Karush-Kuhn-Tucker (KKT) points of the nonconvex SymNMF problem. Furthermore, it achieves a global sublinear convergence rate. We also show that the algorithm can be efficiently implemented in parallel. Further, sufficient conditions are provided which guarantee the global and local optimality of the obtained solutions. Extensive numerical results performed on both synthetic and real data sets suggest that the proposed algorithm converges quickly to a local minimum solution.

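The variable-splitting idea — factor A ≈ W Zᵀ with a penalty pulling the two factors W and Z together so that W Wᵀ ≈ A — can be sketched with simple alternating clipped least-squares updates. This is an illustrative heuristic in the spirit of the splitting formulation, not the paper's algorithm, and it carries none of the paper's KKT convergence or optimality guarantees; the penalty weight `rho` and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def symnmf_split(A, r, rho=1.0, iters=200):
    """Heuristic splitting for SymNMF: minimize ||A - W Z^T||_F^2 + rho ||W - Z||_F^2
    by alternating regularized least squares, clipping each factor to be nonnegative."""
    n = A.shape[0]
    W = rng.random((n, r))
    Z = W.copy()
    I = np.eye(r)
    for _ in range(iters):
        # each update solves a ridge-regularized LS problem, then projects onto >= 0
        W = np.maximum((A @ Z + rho * Z) @ np.linalg.inv(Z.T @ Z + rho * I), 0.0)
        Z = np.maximum((A @ W + rho * W) @ np.linalg.inv(W.T @ W + rho * I), 0.0)
    return W, Z

# symmetric nonnegative test matrix with planted rank-2 structure
H = rng.random((6, 2))
A = H @ H.T
W, Z = symnmf_split(A, 2)
err = np.linalg.norm(A - W @ Z.T) / np.linalg.norm(A)
```

Note that the planted factorization is a fixed point of these updates: plugging W = Z = H gives (A H + H)(HᵀH + I)⁻¹ = H(HᵀH + I)(HᵀH + I)⁻¹ = H, which is why the penalty term does not bias the recovered factors on exactly factorizable input.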
