Autoencoders are a neat trick for learning the structure hidden in the input data. A neural network is trained to reproduce its input at the output, so the weights of the hidden layer come to represent the input itself, effectively discovering the geometry of the data, if there is any. When the hidden units are fewer than the dimensions of the input, autoencoders resemble Principal Component Analysis (PCA). The main difference between the two is that the non-linear activation of an autoencoder can capture non-linear structure in the data (if any), something that is not possible to achieve with PCA.
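To make this concrete, here is a minimal sketch of a one-hidden-layer autoencoder in plain NumPy. It is not from the original post: the toy dataset, layer sizes, learning rate, and number of epochs are all illustrative assumptions. The data are 3-D points lying on a curved 1-D manifold, exactly the kind of non-linear structure a one-component PCA (a straight line) cannot represent but a one-unit non-linear bottleneck can approximate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (an assumption for illustration): 200 points in 3-D that
# actually live on a curved 1-D manifold t -> (t, t^2, t^3).
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([t, t**2, t**3]) + 0.01 * rng.normal(size=(200, 3))

n_in, n_hidden = 3, 1                 # bottleneck: fewer units than input dims
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_in))
b2 = np.zeros(n_in)

lr = 0.1                              # learning rate (assumed value)
for epoch in range(2000):
    # Forward pass: non-linear encoder, linear decoder.
    H = np.tanh(X @ W1 + b1)          # hidden representation (the "code")
    X_hat = H @ W2 + b2               # reconstruction of the input
    err = X_hat - X                   # reconstruction error

    # Backward pass: gradients of the mean squared reconstruction error.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)    # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    # Plain gradient-descent updates.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final reconstruction MSE:", (err**2).mean())
```

The tanh in the encoder is the whole point of the comparison above: with it, the single hidden unit can bend to follow the curve; replace it with an identity function and the network collapses to a rank-1 linear map, which is essentially what PCA with one component gives you.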