Surfing the Metaverse
27th-Aug-2016 07:13 pm - Generative Models
Somewhat rambling, but wanted to give some context to some planned future posts that will go into more detail (maybe)...

Generative Models
22nd-Aug-2016 02:18 pm - Blockchain Supply/Demand
Bitcoin actually has two supply and demand systems: supply and demand for bitcoin itself, which dictates the exchange rate; and supply and demand for space on the blockchain. The supply of blockchain space is essentially fixed, and for much of bitcoin's history it exceeded demand, so the price you paid for a slot in the blockchain was negligible - either zero, or fractions of a penny if you were feeling generous.

Demand now outstrips supply, so a true market is forming and we can look at transaction fees to gauge demand for blockchain space. The Y-axis here is total US dollars per day spent on bitcoin transaction fees...

Specifically I'm thinking about multiplying a sparse matrix with a vector (or single column matrix if you prefer). Why? Because this is the operation that propagates signals between two sparsely connected layers of a neural network. Or alternatively, let's say your network isn't arranged in neat and tidy layers, e.g. maybe there are cyclic connections or connections leaping over layers; you can represent the entire connectivity graph of such a network with a single sparse matrix.

There are standard data structures for storing sparse matrices, the most common being CSR (compressed sparse row) and CSC (compressed sparse column).
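
To make the CSR layout concrete, here's a minimal sketch of the format and the matrix-vector multiply it supports (my own type and field names, not from any particular library). Note the indirection through the column-index array when reading the input vector - that's the non-contiguous access pattern discussed further down.

```csharp
// Minimal sketch of CSR (compressed sparse row) storage and a
// sparse matrix * dense vector multiply.
public sealed class CsrMatrix
{
    public int RowCount;          // number of rows
    public double[] Values;       // non-zero values, stored row by row
    public int[] ColumnIndices;   // column index of each stored value
    public int[] RowPointers;     // RowPointers[r]..RowPointers[r+1]-1 index the non-zeros of row r

    // y = A * x
    public double[] Multiply(double[] x)
    {
        var y = new double[RowCount];
        for (int r = 0; r < RowCount; r++)
        {
            double sum = 0.0;
            for (int i = RowPointers[r]; i < RowPointers[r + 1]; i++)
            {
                // Indirect read of x via the column index - the non-contiguous access.
                sum += Values[i] * x[ColumnIndices[i]];
            }
            y[r] = sum;
        }
        return y;
    }
}
```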

Side note: I can think of two occasions over the years where I've worked with large sparse matrices while blissfully unaware of any standards for working with them: for the Netflix Prize data set I used a bit-packed form of what is apparently called List of Lists (LIL), and in SharpNEAT I use what is apparently called Coordinate list (COO).
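
For comparison, a COO matrix is just a flat list of (row, column, value) triplets, which makes it very easy to build up incrementally. A minimal sketch, again with my own names rather than any library's:

```csharp
using System.Collections.Generic;

// COO (coordinate list): one (row, col, value) triplet per non-zero entry.
public struct CooEntry
{
    public int Row;
    public int Col;
    public double Value;

    public CooEntry(int row, int col, double value)
    {
        Row = row; Col = col; Value = value;
    }
}

public static class CooOps
{
    // y = A * x, with A given as an unordered list of COO entries.
    public static double[] Multiply(IReadOnlyList<CooEntry> entries, int rowCount, double[] x)
    {
        var y = new double[rowCount];
        foreach (var e in entries)
        {
            y[e.Row] += e.Value * x[e.Col];
        }
        return y;
    }
}
```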

Regarding efficient multiplication there are a few options. Bear in mind that dense matrices are inherently conducive to being number crunched by CPU vector instructions and/or GPUs because the data sits in contiguous chunks that are operated on in a homogeneous way, i.e. it fits the pattern of SIMD with no compare/branching required.

Sparse matrices though present a bit of a problem since all of the various compressed formats require some degree of memory access indirection - but it turns out you can still benefit significantly from the widely available vector instructions and parallel compute platforms out there.

For instance, my spangly new Core i7 (Skylake) CPU implements the AVX2 SIMD instruction set, providing 256 bit SIMD registers (4 doubles, or 8 floats) and a useful fused multiply-add instruction (FMA) - very handy for matrix multiplication.
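
Some of that SIMD capability is reachable from managed code too, via System.Numerics.Vector&lt;T&gt;, which RyuJIT maps onto SSE/AVX where available. Here's a minimal sketch of a vectorised dot product - the core loop of a dense matrix-vector multiply; whether the JIT actually emits FMA instructions for this is outside the code's control, so treat that part as an open question rather than a claim:

```csharp
using System.Numerics;

public static class DenseOps
{
    // Dot product using Vector<double>. On an AVX2 machine Vector<double>.Count
    // is typically 4; the trailing loop handles lengths that aren't a multiple of that.
    public static double Dot(double[] a, double[] b)
    {
        int i = 0;
        int width = Vector<double>.Count;
        var acc = Vector<double>.Zero;

        for (; i <= a.Length - width; i += width)
        {
            acc += new Vector<double>(a, i) * new Vector<double>(b, i);
        }

        // Horizontal sum of the accumulator (dot with an all-ones vector).
        double sum = Vector.Dot(acc, Vector<double>.One);

        for (; i < a.Length; i++)
        {
            sum += a[i] * b[i];
        }
        return sum;
    }
}
```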

Some relevant links:

Overview of Intel® MKL Sparse BLAS
Efficient Sparse Matrix-Vector Multiplication on CUDA [1]
Performance Evaluation of Sparse Matrix Multiplication Kernels on Intel Xeon Phi

Intel conveniently provide the Math Kernel Library (MKL), which includes a number of routines for matrix-vector multiplication that consume various sparse formats (CSR, CSC, COO, DIA, SKY, BSR). From [1]:

...codes should try to use Sparse BLAS formats with contiguous memory patterns (for example BSR, SKY, or DIA) to simplify vectorization & reuse data in cache as opposed to Sparse formats with possibly non-contiguous memory pattern like CSR or CSC.

SKY and DIA are specific to triangular factors and diagonals. That leaves BSR (block sparse row format); what is it and why is it better than CSR, given that CSR does store its data in contiguous blocks? It appears that BSR is an attempt to improve data locality by arranging a sparse matrix into a series of dense sub-matrices. Makes sense (a small worked example follows the links below).

Sparse BLAS BSR Matrix Storage Format
Block Compressed Row Format (BSR)
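
A small worked example of the BSR layout, using 2x2 blocks. Array names are my own; the MKL docs define the format and its exact array names precisely.

```csharp
// BSR with a block size of 2: the matrix is viewed as a grid of 2x2 blocks and
// only the non-zero blocks are stored, each as a small dense sub-matrix.
//
//     | 1 2 0 0 |      block row 0: one stored block, at block-column 0
// A = | 3 4 0 0 |      block row 1: one stored block, at block-column 1
//     | 0 0 5 6 |
//     | 0 0 7 8 |
//
public static class BsrExample
{
    public const int BlockSize = 2;

    // Each stored block laid out densely (row-major here; libraries may differ).
    public static readonly double[] BlockValues = { 1, 2, 3, 4,   5, 6, 7, 8 };

    // Block-column index of each stored block.
    public static readonly int[] BlockColIndices = { 0, 1 };

    // Block row r owns stored blocks BlockRowPointers[r] .. BlockRowPointers[r+1]-1.
    public static readonly int[] BlockRowPointers = { 0, 1, 2 };
}
```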

I've been able to get the MKL provider in Math.Net working, and the examples provided on dense matrices are reporting up to a 65x speedup over managed C# on double precision, which is far faster than I was expecting. The dense matrix code requires that the data is arranged in memory in one big array in column major order, which in turn was one of the factors guiding the design of Math.Net's data structures.
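
As an aside, a minimal sketch of what that column-major layout means in terms of indexing (my own helper, not Math.Net's API):

```csharp
// Column-major layout: each column of an m-by-n matrix occupies a contiguous
// run of m values, and element (row, col) lives at data[col * m + row].
public static class ColumnMajor
{
    public static double Get(double[] data, int m, int row, int col) => data[col * m + row];
    public static void Set(double[] data, int m, int row, int col, double value) => data[col * m + row] = value;
}
```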

I was hoping to piggy back on Math.Net but from looking at the source code they don't appear to have plugged in the MKL sparse matrix routines, or even have them in their abstraction layer to allow them to be plugged in. However, I figure I can call the MKL routines directly so long as I can provide matrices in a suitable form.
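
As a rough sketch of what that might look like: MKL's Sparse BLAS Level 2 includes a CSR matrix-vector routine, mkl_dcsrmv (y := alpha*A*x + beta*y), which could be reached via P/Invoke along these lines. The exact signature here is reconstructed from memory of the MKL documentation, and the 'mkl_rt' single-dynamic-library name and 32-bit MKL_INT (LP64 build) are assumptions - all of it should be checked against the MKL reference before relying on it.

```csharp
using System.Runtime.InteropServices;

public static class MklSparseBlas
{
    // Signature reconstructed from the MKL Sparse BLAS docs; verify before use.
    [DllImport("mkl_rt", CallingConvention = CallingConvention.Cdecl)]
    private static extern void mkl_dcsrmv(
        ref byte transa,            // (byte)'N' => no transpose
        ref int m, ref int k,       // rows and columns of A
        ref double alpha,
        byte[] matdescra,           // matrix descriptor: 'G' = general, 'C' = zero-based indexing
        double[] val,               // non-zero values
        int[] indx,                 // column indices
        int[] pntrb,                // per-row start offsets into val/indx
        int[] pntre,                // per-row end offsets
        double[] x,
        ref double beta,
        double[] y);

    // y := A * x, using the CsrMatrix sketch from earlier.
    public static void Multiply(CsrMatrix a, double[] x, double[] y)
    {
        byte transa = (byte)'N';
        int m = a.RowCount;
        int k = x.Length;
        double alpha = 1.0, beta = 0.0;
        byte[] matdescra = { (byte)'G', (byte)' ', (byte)' ', (byte)'C', (byte)' ', (byte)' ' };

        // pntrb/pntre are just two offset views of the usual row-pointer array.
        var pntrb = new int[m];
        var pntre = new int[m];
        for (int r = 0; r < m; r++)
        {
            pntrb[r] = a.RowPointers[r];
            pntre[r] = a.RowPointers[r + 1];
        }

        mkl_dcsrmv(ref transa, ref m, ref k, ref alpha, matdescra,
                   a.Values, a.ColumnIndices, pntrb, pntre, x, ref beta, y);
    }
}
```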

This is a really nice option for optimising .NET code, since it hands big chunks of data over to MKL where Intel's heavily optimised code processes them using instructions not accessible from .NET.

There's also OpenBLAS which should have a very similar usage pattern, and perhaps ATLAS, LAPACK and ACML?

On the GPU front there is a CUDA provider for dense matrix multiplication in Math.NET, so hopefully I can plug in to their sparse routines in the same way as with MKL. I don't have an Nvidia GPU though; I'm more interested in what options there are for utilising the Intel GPUs built into the CPUs. Shame there's no OpenCL provider in Math.NET, but there's enough to go on up top for now.
29th-Mar-2016 09:28 pm - Neural Net Strategy Space. Take 2.
In the context of neuro-evolution...

I previously floated the idea of defining a position within some strategy space: Evolutionary Strategy Space - General Approach for Evolved Neural Nets.

This is related to NEAT in which there is a population of genomes that are being evolved, and species are defined by applying some kind of distance/similarity metric to the genomes - similar genomes are more likely to mate with each other, and by protecting species we also prevent a strong sub-population from dominating the population.

The distance metric typically employed is based on the network structure and weights; therefore we are making room for low-fitness novel structures to exist, with the hope that one of those structures will evolve into something good. The strategy space idea is simply to substitute the network-structure-based metric with one based on the behaviour of the network; why?

In neuro-evolution the networks can quickly aggregate lots of functionally redundant, vestigial structure; think of whole sub-networks with no purpose. We could apply fitness pressure against network size, but that isn't without its problems, most notably the creation of yet more local maxima in the fitness space. Whether the network-based similarity is horribly flawed or merely sub-optimal is an open question that not many people are rushing in to explore and research. My hunch is that it's horribly flawed (hunch or educated guess?).

My proposed solution is to define a behaviour-based space, and my previous post on the topic outlined one possible way of doing that, which on reflection is probably flawed also. The idea was to try and functionally sample the network by applying inputs over the distribution of possible inputs, propagating the inputs through the net to the outputs, where we read the outputs. For the sake of argument, if there is one output node then we get a 1D vector of real numbers which we can interpret as a coordinate in some space. The idea falls apart once the number of inputs goes beyond a trivially small number, such that the size of the space to sample with any reasonable density becomes astronomical. We could still take a very sparse subset of samples and hope that it is representative of behaviour, but that feels pretty tenuous (though worth considering nonetheless). My second concern (which is related to the first) is that this strategy space, even without sparse sampling, would not be a good match for behavioural space, i.e. it would have the same problem as the genetic space, namely that large regions of the space would be irrelevant to the fitness function being selected for.

Perhaps a saner way to progress would be to simply evaluate the fitness function normally (which we have to do anyway) and read the pattern of signals observed at the output nodes. Once again we have a vector, but this time it describes the behaviour of the network, albeit specific to the particular instance of the problem presented - i.e. if you have some critter in a physical world then the outputs controlling the critter's locomotion might be very different if the critter starts at a different location, e.g. nearer to some danger.

Let's consider a drastically simpler problem domain, representing y = sin(x) over some small range of x. The fitness function might sample the network's response to 100 values of x over the interval [0,2*pi]. Not very interesting, but the species are now guaranteed to be defined by similarity in the shape of the sine wave approximation being generated by genomes, and that is the case even where a similar shape is being produced in different ways - which arguably is a good quality of the method, since these strategies might mate to produce strategies that are similar.
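
A minimal sketch of that idea, with the network stubbed out as a plain function from x to its single output value (the names here are hypothetical, not SharpNEAT's API). Speciation would then cluster genomes by the distance between their behaviour vectors, rather than by genome similarity.

```csharp
using System;

public static class BehaviourSpace
{
    // Behaviour vector: the network's output at 100 sample points over [0, 2*pi].
    // 'network' stands in for a decoded phenome's activation - a hypothetical stub.
    public static double[] BehaviourVector(Func<double, double> network, int sampleCount = 100)
    {
        var behaviour = new double[sampleCount];
        for (int i = 0; i < sampleCount; i++)
        {
            double x = (2.0 * Math.PI * i) / (sampleCount - 1);
            behaviour[i] = network(x);
        }
        return behaviour;
    }

    // Euclidean distance between two behaviour vectors - one candidate
    // distance metric for defining species in behaviour space.
    public static double Distance(double[] a, double[] b)
    {
        double sumSq = 0.0;
        for (int i = 0; i < a.Length; i++)
        {
            double d = a[i] - b[i];
            sumSq += d * d;
        }
        return Math.Sqrt(sumSq);
    }
}
```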

Other alternatives:

The Island model. Don't define species; allow a single strong solution to dominate the population and obliterate genetic diversity, but have lots of little populations doing that, all perhaps evolving in different directions. This does have the problem of canalisation, i.e. depending on the fitness space the islands might all get stuck in the same small set of low quality local maxima.

The multi-objective model. Define multiple fitness metrics (e.g. smart, strong, fast) which together define a space that genomes can be positioned within. Genomes can mate with those nearby (so it's a strategy space of sorts), and selection can operate in a way that tries to maintain an even distribution of genomes throughout the space, thus preventing one strong strategy from dominating the entire population.

The broader question is how neuro-evolution, and evolutionary computing as a whole, fit in against the rise of deep learning and gradient-following methods. Is there a place for them anywhere?
13th-Jan-2016 10:03 pm - Oil, Bitcoin and the KLF
It's 1994. The KLF have recently burned one million quid of cash as bundles of fifties. It was one of those brain fart moments where I'm trying to get my head around what this means and I realise I don't have the necessary knowledge to hand. On one hand if you bought say a 1 million pound house and burned it down that would be the destruction of something real - that's a definite real loss. This wasn't that, it was just some bits of paper. That was probably the first time I'd properly thought about what money is and began to realise that there is more to it than the surface understanding that most people have.

Combine all that with a fascination with computer science and you've got the makings of a bitcoin fanatic. What is going on with bitcoin anyway? Last year the price fell and seemed to find a stable level at around $220. Since about October the price has roughly doubled, but in a fairly subdued way that hasn't caused the usual bubble-like greed-fuelled buying and associated price spike. It's almost like there's been a shift in fundamentals.

Back in October there was a bit of coverage in the media, perhaps most prominently there was a bitcoin lead/feature article in The Economist, from which followed a number of articles in other established newspapers. Given the small bitcoin user base it's likely this one event has generated enough interest to shift the price. There's also the pending block reward halving, due to occur in July.

Bitcoin's money supply is well defined and rigid, so in principle a halving in the rate of new supply shouldn't have much of an effect, but the reality is that short term effects matter. Miners must expend a certain amount of money to receive the new bitcoin, and that amount is set by competition pressures. As such there is a natural tendency to want to recoup at least the amount of money spent mining by selling the newly mined bitcoins. With the halving, the short term price pressures change dramatically. It was looking like a significant portion of the miners would be priced out of the market and would just switch off and sell up, but option B was that the price would rise and keep the existing miners in business. If an increasing number of people begin to see bitcoin as a useful store of wealth then this was always a possibility.

Oil markets are experiencing short term dynamics too. With storage around the world at or reaching capacity, even if you wanted to go long on the oil price there's limited opportunity to do so. If you buy it now to sell next year then where are you going to store it? Combine this with Saudi Arabia and company hell-bent on pushing the price as low as they can while they can, and you've got sub-$30 oil and the very real possibility of going much lower. I wouldn't be surprised if the price of oil fell to $10 at some point in 2016. Inevitably though the dynamics will shift, and a few years from now world demand will be high, production will fall and the price will rise dramatically. If we had a big storage facility we could even out the price surges, and in fact we have such a store: it's called the oil fields of Saudi Arabia. Unfortunately the Saudis control that store and get to manipulate the price however they want to.

Bitcoin is of course a very different market. Supply may be known and rigid, but bitcoin demand is a very fuzzy notion. There are already lots of monies out there; why would you hold bitcoin instead of one of the traditional stable monies such as gold or Swiss francs? The answer is that the vast majority of investors who have to decide such things don't hold bitcoin and have no intention of ever doing so. Recent price shifts are due to the incredibly small size of the market, such that a few new players can radically change the price.

What scenarios could increase demand for bitcoin? Its use as a payment system is interesting but dubious - we already have payment systems that work perfectly well as far as most people are concerned, so it's not a big win for bitcoin in my book. The remittance market might be interesting, but in the medium term at least I see it as a small market for bitcoin - and what money is transmitted via bitcoin will, on the whole, be exchanged for local currency fairly quickly.

My prediction is that bitcoin will continue to be a niche money system for the time being. If there is a financial crash of 2007 levels (or greater) then that might give bitcoin a boost, in particular if the dollar system enters into a crisis due to selling from the various vast dollar reserves around the world, then that definitely feeds into the narrative of holding 'safe' or safer moneys - gold, Swiss francs, bitcoin.

The dollar system definitely has some of the tell-tale signs of a system that remains stable for decades, only to fail in one big systemic failure event - as opposed to a slow slide into obscurity and quiet transition to something else. Only the big fail causes the doubt in traditional fiat money that feeds into the high value bitcoin narrative. A slow slide would more likely be into another fiat currency, as has happened on numerous occasions in the past.

On the other hand, the total energy and resource consumption of the bitcoin system compared to traditional retail banking could make it an inevitable long term winner. All of those bank properties, infrastructure, staff - it's tempting to see it all as mostly waste compared to 'cryptographic money'. And maybe zero interest rates on savings accounts are a sign that world growth is coming to a halt, leaving wealth preservation as a key goal for 'investors', instead of investment returns.

One last thought. Many systems have appeared to be non-viable before they existed and scaled up, e.g. who would buy the first phone, and who would they talk to? The reality is that the laws of physics provide a rich source of bootstrap mechanisms. Asteroids falling from the sky, ice ages, hurricanes, tsunamis, Cambrian explosions - black swans. Regarding the phone: some existing organisation such as a big factory or the military buys a phone system, not one phone; problem solved. With other technologies it's merely the whims of human behaviour that drive demand for things that appear non-viable to an economist.

Bitcoin already made one leap beyond the expectation of many - from nothing to something, so why not another leap?

The Klf: (K Foundation) - Burn A Million Quid (Part 1 of 5)
http://www.macrotrends.net/1380/gold-to-oil-ratio-historical-chart
6th-Dec-2015 01:42 pm - On Communication
Imagine trying to describe a style of lamp you've seen that you like. You're pretty sure the style has a name but don't know what it is. You're also pretty sure the person you're talking to knows the name of the style, so would understand instantly if you knew the right word - but you don't.

Natural language works because we all arrive at a common dictionary of words and their meaning. Sometimes a word has slightly different meanings across a population, but on the whole the core vocabulary is unambiguous. And then we have these non-core words, such as that style of lamp.

The mapping from a word to a meaning occurs in our brains probably with something akin to a deep neural network, i.e. there are layers of neurons representing layers of abstraction of meaning/concepts. Hearing or seeing a word causes the low level neurons to fire, invoking a mapping from heard phonemes through to a combination of neurons that represent the concept being conveyed. E.g. if I say 'red triangle' you now have some nodes firing that represent 'red' and 'triangle' (and probably a bunch of other nodes such as 'shape', 'object', 'geometry').

Some of the nodes fire probabilistically - did he mean solid red or the edges only? An abstract triangle shape or the musical instrument? Those uncertainties weren't present in my brain - my neuron firing pattern was turned into a spoken phrase which didn't convey all of the information in my brain about what I was thinking of. The phrase represents some key features only and much of the finer detail is lost due to practical reasons - it's enough information for practical purposes, and describing more detail is usually not relevant to whatever discussion is being had, so there's a balance between practicality and accuracy.

In any case, although we all have neurons that represent all of the key features in our 'world', my neurons are arranged differently to yours. Our high level brain architecture is broadly the same, but the specific wirings are unique to everyone. My 'red' neuron is in a different position, physically and logically, from everyone else's, and will have slightly different wirings: I might think of the red planet Mars, you might think of a football team.

Imagine though if our brain wiring was identical, that our brains were all clones. This would allow the possibility of transferring exact meaning by examining my neuron firing pattern and activating/exciting the exact same neurons in your brain. Now we can convey exact meaning rapidly and with no ambiguity.

An evolution of this idea would be brains with a core shared wiring pattern to allow the above described efficient communication, but that then allowed new wiring to occur on top of the core wiring. So a sort of hybrid approach where if I learn about some new concept I now have neurons and concepts that aren't in the core wiring and language, and we're back to having to crudely describe and encode concepts with combinations of core concepts that we think (or know) the other person has.

AI agents could package up these non-core 'modules' as optional installations/extensions... "Please wait while I download the module that has the neuron IDs you're referring to... OK carry on".

So much of human civilisation's abilities are based on our ability not only to understand abstractly, but to communicate that understanding with others. A population of AI agents could, in principle, not only have better (deeper and wider) understanding than humans, but their communication could be greatly superior too.

Another approach would be a single massive 'super-brain', and hence no communication would be necessary. I tend to think such a brain would ultimately fail - stable systems need to be made of redundant parts - and I think any AI-based intelligence would understand that and therefore avoid the single point of failure of one big brain.
21st-Nov-2015 11:45 am - The View from Here
The internet economy is chock full of people trying to make money by solving problems. The pull derives from a double-pronged motivation - making money to improve your own life, and solving problems to improve the lives of others. (I'm leaving aside those that make money at the expense of others, the zero-sum or negative-sum folks, of which there are many.)

Sometimes it feels like the internet economy is saturated, resulting in lots of loss-leading money chasing a dwindling set of problems that remain to be solved. The argument is that that's how a free enterprise economy works: you need people trying out myriad projects to hit on the small proportion that make it big and deliver a real improvement, i.e. the benefit of the few successes is greater than all of the loss-leading effort (see e.g. Western democracies versus the Soviet Union).

What big problems remain to be solved? Jevons' Paradox ensures a strong supply of new problems for each one we solve. A few current ones that come to mind:

* Atmospheric CO2 and its myriad consequences.
* Increasing levels of dementia and need for social care.
* Nutrient/quality depletion of agricultural land.
* Resource depletion resulting in increased cost of extraction, loss of some resource, and existential threats in some cases (phosphate?)
* Social cohesion problems due to urbanisation and displacement.
* Wealth imbalance stresses, causing myriad problems such as underfunding of essential services and social cohesion problems.
* Poor governance (at all levels, national, regional, cities, schools, etc.)

Does the internet economy have a significant role to play in any of these? Or should we be nudging the army of web developers towards a change of career? How far can you get towards solving these sorts of problems with yet another web or mobile app (yawoma)? Should we be focusing on more traditional skills - training as doctors, builders, carpenters, town planners, nurses?

Poor governance stands out as a root problem for lots of things. How can we tempt the wiser and smarter people into governance roles that they tend to avoid precisely because they are more wise?

Some people defend the UK's first-past-the-post electoral process because it results in strong governments in parliament even when there is no strong support from the electorate (versus e.g. Germany's successive broad coalitions); do they have a point, or should we be tackling this as a high-priority root cause of lots of other problems? (The same goes for the US electoral system.)

I overheard a conversation recently about the alternative vote referendum, where this exact argument was made - i.e. the current system ain't great but there's concern about what happens if you change it. Where does this sort of deep conservatism come from? In my view most people are decent and society as a whole gets on pretty well without any government intervention. We aren't all going to descend into a dystopian hell-hole just because government is made up of people with a wider range of views who have to find common ground. I guess it's the 'death by committee' concern/scenario - overall though I believe it's significantly preferable to what we have, but what evidence do we have and how do you make progress on a problem like this?

Maybe I'll just stick to web development.