Will cloud computing turn ERP on its head?
Is it possible that the traditional ERP vendors may lose their dominant positions in mid-size and large enterprises because of cloud computing and what it enables – notwithstanding their own efforts to exploit the cloud?
Seems to me that the cloud enables business managers to demand a different experience when implementing information solutions to support their businesses. There is an emerging demand for simpler, faster, cheaper implementations – potentially not built on one integrated solution from a single ERP vendor. And this may work well for the implementation partners too. Ultimately they may be required to work off a reduced margin – but for significantly reduced investment and a reduced risk of failure.
Excellent piece recently in CIO dealing with the future of ERP. The piece does not purport to have all the answers – but it certainly speaks to the challenges facing traditional vendors and the opportunities for those with solutions built for the cloud.
UK government pushes ahead to support linked open data initiative
data.gov.uk is about to become a reality. Tim Berners-Lee and Nigel Shadbolt cover this off in their article, ‘Put in your postcode, out comes the data’, in The Times, 18/11/09.
The UK government is moving forward on a similar basis to the US government – in making public data available to the public.
Curious to see how far advanced we are in implementing something similar in Ireland – in the context of our knowledge society and smart economy. It must make sense to make this type of information available – as argued by Tim Berners-Lee in the referenced article.
Semantics can be used to optimise online advertising efforts
Three recently reported examples of semantic web technologies being used to improve online advertising efforts.
OpenAmplify is a web service developed by Hapax that brings human understanding to content. Using patented Natural Language Processing technology, OpenAmplify reads and understands every word used in text. It identifies the significant topics, brands, people, perspectives, emotions, actions and timescales and presents the findings in an actionable XML structure.
NEW YORK – ad pepper media, the international online advertising network and semantic advertising technology solutions provider, launched the SiteScreen for Agencies platform, enabling advertising agencies to apply its ground-breaking SiteScreen semantic brand protection technology across their entire range of online media buys to effectively prevent ad misplacements.
Read more: http://www.adoperationsonline.com/2009/11/12/ad-pepper-media-launches-sitescreen-for-agencies/#ixzz0XL2vwtcR
In Italy, Quattroruote is a leading online magazine for car aficionados and buyers, with its reputation built on testing and evaluating models and its own blue book-like price estimates for vehicles. Now it’s a leading-edge user of semantic web technology, too.
It has deployed Expert System’s Cogito semantic solution to help add value to user searches for used cars in its portal to the world of classified car sales.
Do not confuse much of current semantic web and human intelligence
There is a great deal written about web 3.0/semantic web in terms of knowledge and intelligence. Much of it relates to computers being able to process data published on the web and ‘understand’ it – either via Natural Language Processing type solutions or through markups such as the Resource Description Framework (RDF).
A piece of research being conducted by IBM reminds us of the competition – the human brain.
For now I see the real benefit of the semantic web as giving me some assistance in processing the vast amount of data available on the web (and within enterprises, under linked open data initiatives). For instance, if I am going to a meeting to discuss evolving health & safety issues in the construction industry in Australia, and I have a piece of software which can filter, find and summarise much of the information and data in the public domain, then my contribution to the meeting may be more valuable (or my preparation time may be reduced). Again, within the context of the semantic web, my profile – if I have an interest in such a field – should result in my being prompted with relevant information. This ties in with Kevin Kelly’s dictum, ‘No personalisation without transparency’.
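That kind of topic-driven filtering can be sketched very simply once public data has been reduced to subject–predicate–object triples. The snippet below is a toy illustration only – the report identifiers and Dublin Core-style predicate names are invented for the example:

```python
# A toy triple store: (subject, predicate, object) tuples.
# All identifiers and values below are invented for illustration.
triples = [
    ("report:2009-14", "dc:subject", "health-and-safety"),
    ("report:2009-14", "dc:coverage", "Australia"),
    ("report:2009-14", "dc:title", "Scaffolding incidents in construction, 2009"),
    ("report:2007-03", "dc:subject", "water-quality"),
    ("report:2007-03", "dc:coverage", "Ireland"),
]

def find_subjects(triples, predicate, value):
    """Return every subject carrying the given predicate/value pair."""
    return {s for (s, p, o) in triples if p == predicate and o == value}

def matching_reports(triples, topic, region):
    """Intersect two filters: reports about `topic` covering `region`."""
    return (find_subjects(triples, "dc:subject", topic)
            & find_subjects(triples, "dc:coverage", region))

print(matching_reports(triples, "health-and-safety", "Australia"))
# → {'report:2009-14'}
```

A real semantic web agent would run the equivalent query (in SPARQL) across many published datasets rather than one in-memory list, but the principle – machine-processable statements that can be filtered and intersected – is the same.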
Explaining the semantic web
Find myself being asked more regularly to explain ‘the semantic web’. I think it’s a combination of a growing awareness of the semantic web in the business community and a greater focus on the topic on my own part.
Read a piece this morning on the hypios web site – a web 2.0 based problem solving site. In the first page of this essay the author offers an excellent introduction to the semantic web (and the requirement for a semantic web).
The only reservation I would have is the ‘plea’ to business to make more data available publicly as linked open data. I agree with the sentiment – but I am not sure that business will act on sentiment alone.
Understanding the basics of semiotics and semantics
Excellent presentation (to undergrads I presume) outlining background to semiotics and semantics.
Great start – asks the participants to define ‘forward’ in 15 seconds.
Works through the basics of symbols, icons and indices. This in turn leads on to the importance of context (more important for symbols e.g. language than for icons).
Follows on from this to explain the need for rules and agreed terminologies – leading to ontologies.
Making linked open data sound more complex than it needs to
I think Paul Walk’s analysis in his recent posting is clear and to the point.
To some extent I think Tim Berners-Lee may almost be a victim of his own success. Seems to me his initial guidance to government (and others) was to get on with making the data available (at that time he was not stressing the need to provide the data in RDF format). Now that data.gov has provided data, TBL and others are understandably pushing for the data to be in RDF format – to enable linking of the data.
Obviously we, promoting things semantic, want the data to be published and easily linkable. But sometimes, as per Paul’s posting, I think we make it all look a little more confusing than necessary, by ‘mashing’ (apols for the pun) the terminology.
Teething problems with some linked open data initiatives
I referenced recently Tim Berners-Lee’s encouragement to everyone looking to publish linked open data to use the Resource Description Framework. I also referenced in this blog recent work completed by the New York Times in this field. The New York Times initiative has attracted a fair amount of comment in the technical community identifying the teething issues/errors in the data as published.
Stefano Mazzocchi’s recent post, Data Smoke and Mirrors, speaks to some of the issues associated with publishing lots of linked data using RDF. Stefano has reviewed a triplification of all the data from data.gov – and has been left somewhat bemused. The posting itself provides some examples.
The point here is that we want to see the data published and we want to see the standards used – but it’s far from simple, and publishing for the sake of publishing or triplifying for the sake of triplifying may be self-defeating. As a community we need to focus on quality and on the end user of the data.
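To make the “triplifying for its own sake” point concrete, here is a hedged sketch of what a naive row-to-triples conversion looks like, and the kind of quality check a publisher might apply before release. The column names, URIs and checks are all invented for illustration – they are not taken from the data.gov conversion Stefano reviewed:

```python
# Naive "triplification": each column of a flat record becomes a predicate,
# with no thought given to shared vocabularies or linkable URIs.
# All URIs and field names below are invented for illustration.
row = {"agency": "EPA", "year": "2009", "budget_musd": "N/A"}

def naive_triples(subject_uri, record):
    """Turn a flat record into (s, p, o) triples, one per column."""
    return [(subject_uri, f"ex:{key}", value) for key, value in record.items()]

def quality_issues(triples):
    """Flag obvious problems: placeholder values, and opaque literals
    where a linkable URI would serve the end user better."""
    issues = []
    for s, p, o in triples:
        if o in ("N/A", "", "null"):
            issues.append(f"{p}: placeholder value {o!r}")
        elif p == "ex:agency":
            issues.append(f"{p}: literal {o!r} should link to a shared URI")
    return issues

triples = naive_triples("ex:dataset/123", row)
for issue in quality_issues(triples):
    print(issue)
# Flags the opaque 'EPA' literal and the 'N/A' budget placeholder.
```

The mechanical conversion succeeds – three syntactically valid triples come out – yet the result links to nothing and carries a meaningless placeholder, which is exactly the “smoke and mirrors” risk.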
Newspapers should look to publish data using RDF
The New York Times announced last week that it will provide data marked up using RDF (Resource Description Framework).
Why is this important?
This makes the data more useful. You can now cross-reference/correlate the NY Times information with other information available on the web, e.g. DBpedia (the RDF rendering of Wikipedia data). You can also develop applications which can access/process/interpret the NY Times data – because it is provided in RDF format.
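The cross-referencing works roughly as follows. The NY Times data uses owl:sameAs links to assert that one of its entities is the same thing as a DBpedia entity, so a consumer can merge what each source knows. This sketch uses plain tuples rather than a real RDF library, and the URIs and property values are invented stand-ins for the actual identifiers:

```python
# Two datasets describing the same real-world entity under different URIs.
# The owl:sameAs predicate asserts the identity, letting a consumer merge
# what each source knows. All URIs and values are invented for illustration.
nyt = [
    ("nyt:obama_barack", "skos:prefLabel", "Obama, Barack"),
    ("nyt:obama_barack", "nyt:article_count", "500"),
    ("nyt:obama_barack", "owl:sameAs", "dbpedia:Barack_Obama"),
]
dbpedia = [
    ("dbpedia:Barack_Obama", "dbo:birthPlace", "dbpedia:Honolulu"),
]

def merged_view(entity, *graphs):
    """Collect every property of `entity`, following owl:sameAs links."""
    aliases = {entity}
    all_triples = [t for g in graphs for t in g]
    # One pass suffices for this flat example; a real resolver iterates
    # until the alias set stops growing.
    for s, p, o in all_triples:
        if p == "owl:sameAs" and s in aliases:
            aliases.add(o)
    return {(p, o) for s, p, o in all_triples
            if s in aliases and p != "owl:sameAs"}

view = merged_view("nyt:obama_barack", nyt, dbpedia)
# The merged view now includes the DBpedia birthplace alongside
# the NY Times label and article count.
```

This is the payoff of the Linked Open Data approach: neither publisher had to know about the other’s schema – the shared identity link alone makes the combined view possible.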
Interesting development – and makes sense of the Linked Open Data initiative. The NY Times is embracing RDF – to some extent it is giving away its data, but on the other hand its own data is far more valuable because it can easily be combined with other (RDF’d) data.
Quite a challenge to all organisations – especially those generating significant content – who are failing to have their data leveraged properly because it sits in its own silo.
There are challenges in deployment of the semantic web – including provision of data marked up using RDF.
Nice piece by Michael Cataldo outlining potential benefits of semantic web – in terms of making it easier to access data on the web and cross reference/ correlate the data. Michael makes the point that fuller adoption of semantic web principles at an earlier date may have assisted in preventing some of the elements of the subprime crisis.
I am very much a fan of the semantic web, and indeed of the movement towards linked open data. However it is interesting to read reports of Tim Berners-Lee’s own frustrations wrt advances in linked open data, e.g. the fact that data is being published on data.gov in non-RDF formats (thereby limiting the ability of people to browse from this data to other RDF marked-up data).
I think Michael Cataldo, in looking to demonstrate the potential benefits of the semantic web, may be stretching things a little far wrt the subprime crisis – were people motivated to make the data easily understood, or was obfuscation part of the intent?