Surfing the Metaverse
11th-Sep-2017 10:29 pm - Steve Keen Crowd Funding

Steve Keen is now primarily funded via Patreon (at the time of writing he is taking 25% of his salary from Kingston University).

Please consider chipping in via:


I have recently carried out some basic research exploring the relative
merits of eight alternative activation functions (+ the logistic
function to make nine total) in a NEAT context.

The original motivation for this work was to identify activation functions with increased execution speed relative to the logistic function. However, the tests also highlighted two important qualities of activation functions other than their speed of execution: (1) zero centering, and (2) an unbounded range for positive input values.

More details here:
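The two qualities above can be sketched with a quick comparison. The post doesn't list the nine functions tested, so the pairing below (logistic vs. a leaky ReLU) is just an illustrative choice, not the actual test set:

```python
import math

def logistic(x):
    # Range (0, 1): outputs are never negative, so not zero-centered.
    return 1.0 / (1.0 + math.exp(-x))

def leaky_relu(x, a=0.01):
    # Zero-centered around the origin, and unbounded for x > 0.
    return x if x >= 0.0 else a * x

# (1) Zero centering: response to a zero input.
print(logistic(0.0))      # 0.5 -> outputs biased positive
print(leaky_relu(0.0))    # 0.0 -> centered

# (2) Unbounded range for positive inputs.
print(logistic(100.0))    # saturates at 1.0
print(leaky_relu(100.0))  # 100.0 -> signal magnitude preserved
```

Saturation is the practical cost of a bounded range: once the logistic flattens out, large positive inputs all map to essentially the same output.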

Just for a laugh / out of curiosity...

Bitcoin Market Capitalisation
This essentially ended up in the bin but I figured I'd summarise where I got to for possible future reference.

Comparison of the Backpropagation Network and a Generative Binary Stochastic Graphical Model

I also tried a log-log scale, but the trend on that scale curves upwards slightly, so I left it as is.
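That upward curve is what you'd expect if the underlying trend is roughly exponential. The actual market-cap data isn't reproduced in the post, so the sketch below uses a synthetic exponential series purely to show why such a trend is straight on a semi-log scale but convex (curving upwards) on a log-log one:

```python
import numpy as np

# Synthetic stand-in for the market-cap series: pure exponential growth.
days = np.arange(1.0, 1001.0)
cap = 1e6 * np.exp(0.01 * days)

# Semi-log view: log(cap) vs day is exactly linear.
slope_semilog, _ = np.polyfit(days, np.log(cap), 1)

# Log-log view: log(cap) vs log(day) is convex,
# since log(cap) = const + 0.01 * exp(log(day)).
chord_slopes = np.diff(np.log(cap)) / np.diff(np.log(days))
curves_upward = bool(np.all(np.diff(chord_slopes) > 0))
```

So a trend that looks clean on semi-log but bends upward on log-log is consistent with growth faster than any power law, which matches the decision to leave the plot on the original scale.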

Seems like an 'obvious' pattern anyways.
Just wondering if this has been tried.

I.e. instead of regularizing by decaying factors towards zero, randomly drop out half of the factor pairs on each training pass (as per drop-out in neural nets); the remaining factors must then adjust to compensate for the missing contribution. When multiplying the side vectors to produce the modeled matrix at prediction time, we just halve the contribution from each pair of side vectors (R pairs for rank R).

As per drop-out in neural nets, this simulates a massive ensemble with shared weights between the ensemble models.
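A minimal sketch of the idea, assuming a plain gradient-descent factorization (the update rule, sizes, and variable names here are my own illustration, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrix to factorize, and rank-R side vectors.
n, m, R = 20, 15, 8
X = rng.standard_normal((n, m))
U = 0.1 * rng.standard_normal((n, R))   # left side vectors
V = 0.1 * rng.standard_normal((m, R))   # right side vectors

lr = 0.01
for step in range(200):
    # Drop half of the R factor pairs this pass (drop-out rate 0.5).
    keep = rng.random(R) < 0.5
    Uk, Vk = U[:, keep], V[:, keep]
    # Reconstruction error using only the surviving pairs.
    err = X - Uk @ Vk.T
    # Gradient step on the surviving pairs only; the kept factors
    # must account for the contribution of the dropped ones.
    U[:, keep] += lr * err @ Vk
    V[:, keep] += lr * err.T @ Uk

# Prediction time: halve each pair's contribution, the usual
# drop-out trick for approximating the ensemble average.
X_hat = 0.5 * (U @ V.T)
```

As with drop-out in neural nets, the halving at prediction time stands in for averaging over the exponentially many "thinned" factorizations sampled during training.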