Tackling human intelligence

I was drawn to the semantic web and semantic technologies because of the potential benefit to each of us. There is no debate about the growing volumes of data – be that in our personal digitally recorded lives, our business lives or, more generally, on the World Wide Web. So tools and solutions which assist in processing, analysing or making sense of some of this data seem attractive to me. Part of the challenge is having software do some of the heavy lifting. Much of the data which could be subject to that heavy lifting was originally published for human consumption and is not ideally formatted for consumption by software.

So semantics has its place. Can we deal with the ambiguity in the data? In Australia a reference to football may mean ‘Australian Rules’ football, in England it may mean ‘soccer’, and in Ireland it may mean ‘Gaelic football’. So if I have a piece of software doing some heavy lifting across the web to analyse the performances of ‘football full backs’ over a particular weekend in December 2009, my software may be confused – it may mix up the different codes, and so on. I may be able to define my search/query in great detail, but perhaps the data as originally published does not provide the required clarity – risking ‘a question of semantics’.
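As a toy illustration of the point (the mapping below is my own invention, not any standard vocabulary or real semantic web tooling), a disambiguation step might resolve the ambiguous term to a regional sense before any analysis is attempted:

```python
# Toy disambiguation: resolve the ambiguous term "football" to a
# specific football code based on the regional context of the source.
# This region-to-sense mapping is illustrative only.
FOOTBALL_SENSES = {
    "AU": "Australian rules football",
    "GB": "Association football (soccer)",
    "IE": "Gaelic football",
}

def disambiguate(term, region):
    """Return a region-specific sense for an ambiguous term,
    falling back to the raw term when no mapping is known."""
    if term.lower() == "football":
        return FOOTBALL_SENSES.get(region, term)
    return term

print(disambiguate("football", "AU"))  # Australian rules football
print(disambiguate("football", "IE"))  # Gaelic football
```

Of course, the hard part in practice is that the regional context itself usually isn’t recorded in the published data – which is exactly the gap semantic markup is meant to close.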

I was quite taken by the piece ‘Paul Allen: the singularity is not near’, published this week in MIT’s Technology Review. Ray Kurzweil’s thoughts on computer systems surpassing human intelligence in the near future are well known and well documented. Paul Allen and Mark Greaves argue strongly that Kurzweil is being over-optimistic (depending on your viewpoint). They include a number of examples from neuroscience and artificial intelligence, arguing that we will be a long way short of Kurzweil’s vision by 2045 – his predicted date.

Much of this took me back to the simplicity of what we are trying to achieve in semantics and the semantic web – the heavy lifting. And it’s not proving very simple. Yes, the search engines and various semantic tools are presenting improved, cross-referenced, even multi-correlated data – but we have an awfully long way to go.




Peacetime/wartime – the need to tackle lots of data

How to process the ever-increasing volumes of data – in peacetime and wartime

An interesting piece in The Economist, under Artificial Intelligence, deals with different ways of processing increasing amounts of data in wartime or in disaster situations such as earthquakes. The author reminds us of the sheer volume and depth of information being gathered through sensors – and the requirement, given those volumes, to process it using technology. The data may come from the use of drones and the like in a military situation, or through crowdsourcing in a disaster situation.

Depending on one’s perspective, this may be seen as a positive or as yet another example of the surveillance society.
