Autoencoders explained in one page

Autoencoders are a neat trick for learning the structure hidden within input data. A neural network is trained to reproduce its input at its output, so the weights of the hidden layer come to encode the geometry of the data, if there is any. When there are fewer hidden units than input dimensions, an autoencoder resembles Principal Component Analysis (PCA). The main difference between the two is that the non-linear activation function of an autoencoder can capture non-linearity in the data (if any), which is something PCA cannot do.
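To make the idea concrete, here is a minimal sketch in NumPy (not from the original post): a tiny autoencoder with a single tanh hidden unit as the bottleneck and a linear decoder, trained by gradient descent on synthetic 3-D points that lie near a curved 1-D manifold. The architecture, data, and hyperparameters are illustrative assumptions, not a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 3-D lying near a curved 1-D manifold, so a
# non-linear bottleneck can compress them better than a linear projection.
t = rng.uniform(-1.0, 1.0, size=(200, 1))
X = np.hstack([t, t**2, np.sin(2 * t)]) + 0.01 * rng.normal(size=(200, 3))

n_in, n_hidden = 3, 1  # bottleneck narrower than the input, as in the text
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))  # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_in))  # decoder weights
b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)  # non-linear hidden code
    Xhat = H @ W2 + b2        # linear reconstruction
    return H, Xhat

_, Xhat0 = forward(X)
mse0 = np.mean((X - Xhat0) ** 2)  # reconstruction error before training

lr = 0.1
for _ in range(2000):
    H, Xhat = forward(X)
    err = Xhat - X  # gradient of the squared error w.r.t. Xhat (up to 2/N)
    # Backpropagate the reconstruction error through both layers.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)  # tanh derivative is 1 - tanh^2
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, Xhat = forward(X)
mse = np.mean((X - Xhat) ** 2)
print(f"MSE before: {mse0:.4f}, after: {mse:.4f}")
```

Training drives the reconstruction error down; swapping `np.tanh` for the identity would recover a purely linear bottleneck, which is the PCA-like case described above.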



About Piggy

I am Piggy and I spend my life reading about math and, of course, eating. I love science and I support my flatmate, who provides me with problems to solve and, well, food.
This entry was posted in General. Bookmark the permalink.

