Surfing the Metaverse
Recent Entries 
19th-Sep-2018 12:53 am - On Walking

There is no sweeter or more refreshing sleep than that which follows a day's walking, along the coast of bluster and sea spray, through the cool woodland valley following a clear deep gliding stream. Few sample this simple pleasure born out of modest efforts, choosing instead to consume and 'relax', and yet perpetually ponder a seemingly unattainable restful slumber. How rich and yet poor. How muddled the modern mind is; always seeking answers to problems that do not or ought not exist, whilst oblivious to genuine irks and their time-honoured resolutions. And yet, he who discovers these simple wellpools of sanity is oft mocked for it; what self-reinforcing, self-perpetuating insanity is this? Why should these simplest seeds of wellness be guarded against and booby-trapped so? Sapiens, oft methinks not.

11th-Sep-2017 10:29 pm - Steve Keen Crowd Funding

Steve Keen is now primarily funded via Patreon (at the time of writing he is taking 25% of his salary from Kingston University).

Please consider chipping in via:

https://www.patreon.com/ProfSteveKeen

I have recently carried out some basic research exploring the relative merits of eight alternative activation functions (plus the logistic function, making nine in total) in a NEAT context.

The motivation for this work was originally to identify activation functions with increased execution speed relative to the logistic function. However, the tests also highlighted two important qualities of activation functions other than their speed of execution: (1) zero-centering, and (2) an unbounded range for positive input values.

More details here:

http://sharpneat.sourceforge.net/research/activation-fn-review/activation-fn-review.html
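(Aside: a minimal Python sketch of the two qualities mentioned above, contrasting the logistic function with a leaky rectifier. This is not the review code, and the choice of comparison function here is purely my own illustration.)

    import math

    def logistic(x):
        # Logistic sigmoid: outputs lie in (0, 1), so it is neither
        # zero-centred nor unbounded for large positive inputs.
        return 1.0 / (1.0 + math.exp(-x))

    def leaky_relu(x, slope=0.01):
        # A leaky rectifier, shown only as an illustrative contrast:
        # zero-centred (f(0) == 0, negative inputs give negative outputs)
        # and unbounded for positive inputs.
        return x if x >= 0.0 else slope * x

    if __name__ == "__main__":
        for x in (-5.0, -1.0, 0.0, 1.0, 5.0, 50.0):
            print(f"x={x:6.1f}  logistic={logistic(x):.4f}  leaky_relu={leaky_relu(x):9.4f}")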
Just for a laugh / out of curiosity...

Bitcoin Market Capitalisation
This essentially ended up in the bin, but I figured I'd summarise where I got to for possible future reference.

Comparison of the Backpropagation Network and a Generative Binary Stochastic Graphical Model


I also tried a log-log scale, but the trend on that scale curves upwards slightly, so I left it as is.

Seems like an 'obvious' pattern anyway.
Just wondering if this has been tried.

I.e. instead of regularizing by decaying factors towards zero, randomly drop out half of the data points on each training pass (as per drop-out in neural nets); the other factors must then adjust to make up for the missing data. When multiplying the side vectors to produce the modeled matrix, we just halve the contribution from each pair of side vectors (R pairs for rank R).

As per drop-out in neural nets, this simulates a massive ensemble with shared weights between the ensemble models.
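For what it's worth, here is a minimal sketch of one way this might look, assuming (as the prediction-time halving suggests) that it is the R factor pairs that are dropped rather than individual matrix entries; all names and parameters below are illustrative, not taken from any existing implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_dropout_mf(M, observed, rank=8, epochs=200, lr=0.01, drop=0.5):
        # Rank-R factorisation trained by SGD over the observed entries.
        # On each pass a random half of the R factor pairs is switched off,
        # so the surviving pairs must account for the data on their own.
        m, n = M.shape
        U = 0.1 * rng.standard_normal((m, rank))
        V = 0.1 * rng.standard_normal((n, rank))
        rows, cols = observed
        for _ in range(epochs):
            mask = (rng.random(rank) >= drop).astype(float)
            for i, j in zip(rows, cols):
                err = M[i, j] - (U[i] * mask) @ V[j]
                ui = U[i].copy()
                U[i] += lr * err * mask * V[j]
                V[j] += lr * err * mask * ui
        return U, V

    def predict(U, V, keep=0.5):
        # At prediction time every factor pair contributes, but each is
        # halved (scaled by the keep probability), as per drop-out in nets.
        return keep * (U @ V.T)

    if __name__ == "__main__":
        # Toy usage: a rank-3 ground-truth matrix with half the entries observed.
        A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 20))
        obs = rng.random(A.shape) < 0.5
        U, V = train_dropout_mf(A, np.nonzero(obs))
        P = predict(U, V)
        print("held-out RMSE:", np.sqrt(np.mean((P[~obs] - A[~obs]) ** 2)))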