Having the good fortune to work as CSO for Talis, an innovative UK software company, at one of the most exciting times for software and the internet, I thought I would share some of the ideas and insights I am finding exciting at the moment.

I've been away for a while

You may have noticed that I haven't blogged for over a year. How lazy is that! Well, give me a break, I haven't exactly been sitting around.

At Talis we spent a lot of 2005 thinking about the next wave of technology in the world of software and the internet. I ran the Talis research group and we had lots of fun offsites and crazy discussions, as you should in research. But unlike many places, Talis and its management team are focused on putting research and innovation into action (cheesy but true). So throughout 2006 we have been putting our money where our mouth is and building something amazing, something practical yet hugely innovative, something that takes our 36-year-old core business forward while aligning Talis with the coming semantic web wave. And something which I am not going to talk about in detail on this blog :-). To follow the Talis semantic platform you can go to Nodalities, where over the coming months you can find out what we have been building, tell us what you think about it and, most importantly, have a go with the APIs yourself!

I am going to keep my blog focused on the underlying principles of technology innovation, economics and ecosystems. On Nodalities you can see how Talis is putting those principles into action.

In case you are wondering, Nodalities -> Nodes -> network theory, ecosystems, architecture of participation.  You got it.

Posted on Sunday, November 26, 2006 at 07:40AM by Justin Leavesley

The platform is dead. Long live the platform

It seems to me that as we pass into the era of Web 2.0, the software platform as we know it today will cease to have significant commercial value.

The principal reason is that the internet and Web 2.0 allow a move from code sharing to instance sharing for software platforms, causing the existing network-effect mechanism for platforms to fail.

The good news is that we can expect new platform models to emerge, based on the properties of sharing a single, persistent online instance rather than on code sharing across multiple isolated instances (e.g. Windows).

Some companies have already hooked into some aspects of this new model. eBay and Amazon as platforms have it; Google as a platform does not. I am of course talking about the architecture of participation becoming the principal network-effect mechanism for Web 2.0 platforms. That is, if the actions of the users contribute to the shared state of the platform (through whichever platform application they may use) in such a way as to enhance the experience of other users, then there is a strong network effect based upon participation.

It is important to note that the forces enabling this new model are also undoing the previous model.

Here's why (IMHO).
 
Platforms
Over the past 10-15 years, Microsoft demonstrated both the enormous intrinsic and the commercial value of software platforms. We have seen the battle for control of the platform played out across many segments of the software industry and layers in the software stack (Oracle vs Sybase at the database layer, Windows vs OS/2, IBM WebSphere vs BEA WebLogic for application servers, Symbian, etc.).

The return on capital invested simply dwarfed other software models, and so platform leadership became, for many software companies, the one true strategy for growth. The amazing value creation was principally driven by two forces: reuse and the network effect.

Reuse: every application built on a platform is saved from having to make the investment to build the features that the platform provides. This massively lowers the cost of production (and therefore the capital invested) for application developers.

But software libraries and software components do this too, and they are not considered platforms. The difference between a software platform and a software library is the network effect, or ecosystem.

Network effect: each application built for a particular platform increases the value of having that platform and, by extension, of every other application that already uses the platform. So the more applications for a platform, the greater the value of the platform. For a platform owner with a model that can extract commercial value from the platform's massive intrinsic value, the return on capital is therefore a function of the investment that OTHER people have made. Or, put another way, they achieve a return on capital NOT invested by them. Pretty sweet.
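To make that concrete, here is a toy back-of-the-envelope model (my own illustration, with invented symbols, not anything from the platform strategy literature). If $n$ applications are built on the platform, the $i$-th representing an investment $c_i$ by its developer and creating value $v_i$, and the platform owner can capture a fraction $\rho$ of the value flowing through the platform, then roughly:

$$
V_{\text{platform}}(n) \;\approx\; \sum_{i=1}^{n} v_i \;+\; \alpha \binom{n}{2},
\qquad
\text{owner's return} \;\approx\; \rho\, V_{\text{platform}}(n),
\qquad
\text{owner's outlay} \;=\; C_{\text{platform}}.
$$

The $\alpha\binom{n}{2}$ term stands in for the cross-application network effect (each pair of applications makes owning the platform marginally more attractive). The owner's own outlay $C_{\text{platform}}$ is roughly fixed, while the return grows with every $c_i$ that somebody else spends.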

But the real question should be: what causes the network effect in platforms? What is the mechanism by which the investment of application developer A increases the value of the platform and of application B?

Does that same mechanism hold in the world of Web 2.0? My belief is that it doesn't. And that will have a profound effect on the strategy of software companies over the next 10 years. In fact, we are already seeing it.

Traditional cause of the platform network effect

It was the dependency on users having purchased and installed the platform in order to use the applications.
The choice of purchase defined which applications you could use, so naturally the platform with the better range and quality of software was more valuable (just as in the games console industry).

Web 2.0 removes the need for user purchase of the platform
As functionality moves off the user's machine into a standards-based cloud, the user's choice of application platform effectively disappears. By definition, a Web 2.0 platform's API is web based and implementation neutral.
Consider the Google search APIs. Whether there is one application or 100 built on them, the value of the platform is not much enhanced; those applications do not add anything to each other, so there is no network effect. From an ecosystem point of view, Web 2.0 APIs are much more like software libraries than platforms.

Web 2.0 platform network effect

But Web 2.0 platforms have a new trick that traditional platforms don't have. They can easily present one shared instance, i.e. shared state, to all the users of all the different applications. This allows the actions of one user using application A of the platform to enhance the experience of another user using application B. This is the architecture of participation. It is easy to see how both eBay and Amazon increase the power of their content-based network effect through open-access APIs. It is also easy to see why this doesn't work for Google: the end user of a search app typically can't affect the shared state of the platform.
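As a concrete (and entirely invented) sketch of what sharing one instance means in practice, imagine a hypothetical platform object that two unrelated client applications both talk to; none of the names below refer to any real API:

```python
# Toy sketch: one persistent, shared platform instance used by two different
# client applications. A user's action in one app improves what another user
# sees in a completely different app.

class SharedPlatform:
    """A single online instance whose state is shared by every client app."""

    def __init__(self):
        self.reviews = {}  # item id -> list of (user, rating)

    def add_review(self, item, user, rating):
        self.reviews.setdefault(item, []).append((user, rating))

    def average_rating(self, item):
        ratings = [r for _, r in self.reviews.get(item, [])]
        return sum(ratings) / len(ratings) if ratings else None


platform = SharedPlatform()

# Application A: a shop front-end through which Alice leaves a rating.
platform.add_review("harry-potter", user="alice", rating=5)

# Application B: a separate recommendation app used later by Bob. Bob benefits
# from Alice's contribution even though they never touched the same application.
print(platform.average_rating("harry-potter"))  # -> 5.0
```

Bob's experience improved because of Alice's action in a different application; a traditional installed platform has no shared, persistent state in which that contribution could accumulate.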

Open Source Network Effect
Developers still need traditional software platforms, though.
So Web 2.0 platforms allow sharing of the state of the platform, whereas traditional platforms allowed read-only sharing of the code.
There is a way that traditional platforms can drive a network effect: by allowing participation in the shared code of the platform, i.e. letting users contribute to the code. This can immediately drive a whole new network effect which hugely increases the intrinsic value of the platform. Unfortunately for the existing platform vendors, nobody wants to submit code that somebody else will make money off, so this can only be done through open source. Linux is hugely valuable, but nobody can make $billions off its commercial sale, at least not directly.
Interestingly, as more open source code is created, it becomes easier to remix the code and create yet more open source software. The more general the software, the more valuable an open source incarnation becomes; platforms are the natural target for open source, as we have seen with Linux, MySQL, JBoss, etc.

So, for all the reasons above, I am pretty sure that as Web 2.0 progresses we will see the rise of a different type of platform, and the existing platform players will have a very hard time holding onto any serious returns.

Long live the platform of participation. 

Posted on Friday, August 12, 2005 at 09:08AM by Justin Leavesley

Will the real Semantic Web please stand up

We live in an amazing and unique time. Most of you reading this blog were alive at the birth of the global computer, around 15 years ago. In that time the computer has never been switched off, never been rebooted, and has grown to an almost inconceivable size and complexity. The sheer storage and processing power is almost impossible to calculate. The computer is fed information and programmed by the actions of around a billion users, night and day, evolving at an incredible speed. For example, in the last two years over 14 million blogs have appeared, seemingly with no effort or investment!

But there is something else going on other than computing on a grand scale. A new approach to computing is arising, one which fundamentally changes the relationship between the user and the computer. I am talking about an approach based on tapping into the collaborative effort of millions of users to programme software through the everyday actions of those users. The new programs are effectively learning systems that extract training and feedback from users' actions on an unprecedented scale. Fuzziness, statistics and learning over programmatic logic.
The Google spell checker is a great example of this. Google could have sat a bunch of programmers down and coded a spell checker using a dictionary and lots of rules. Doing this in every language under the sun, and keeping it current as new words come into being (e.g. blogging), would have been a great effort. Instead, Google uses the actions of the users to programme the spell checking, extracting patterns of behaviour from users retyping misspelled words and feedback from users accepting a suggested spelling correction. Amazon's "people who bought this book also bought these" system is a more limited example.
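As a rough illustration of the statistical idea (a toy sketch of my own, not Google's actual implementation; the word log below is invented), a corrector can be "programmed" entirely by what users have typed:

```python
# Toy frequency-based corrector: suggest the most frequently observed word
# within one edit of a misspelling. The "training data" is just user behaviour.

from collections import Counter

# Assume a log of words users have successfully searched for.
observed = ["blogging", "blogging", "blogging", "platform", "platform", "semantic"]
counts = Counter(observed)

def edits1(word):
    """All strings one deletion, substitution or insertion away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [left + right[1:] for left, right in splits if right]
    replaces = [left + c + right[1:] for left, right in splits if right for c in letters]
    inserts = [left + c + right for left, right in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(word):
    """Return the most frequently observed candidate, or the word unchanged."""
    candidates = [w for w in edits1(word) if w in counts] or [word]
    return max(candidates, key=lambda w: counts[w])

print(correct("bloging"))  # -> "blogging", learned purely from user behaviour
```

No dictionary and no language rules are coded anywhere; supporting a new language, or a new word like "blogging", just means observing more behaviour.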
Built on participation between the users and the system, the result is what you might call collaborative intelligence.
It is emergent rather than programmed.
It is interesting to note that this is also the same transition that artificial intelligence went through. It became clear that predicate logic based solutions did not scale well and the field turned to fuzzy logic, statistics and neural networks where systems required training rather than programming.

The other important quality of this approach is scalability. It scales implicitly; in fact, it thrives on scale.
Traditional programmatic approaches, essentially based on logic, have a harder time scaling.

Considering that it is really only in the last few years that hardware costs and online community size have enabled experimentation at scale, I am very excited about what the next 10 years will bring in this direction.

So this brings me to the title of this post. It seems to me that humans are very good at semantics, and that systems based on human-computer collaboration (i.e. the emergent properties of large numbers of users) will be very important in semantics-based systems. You could consider del.icio.us, Flickr and the massive rise of tagging and microformats to be very early examples. If the collaborative approach of del.icio.us could be synthesised with more sophisticated semantic methods such as RDF, then we might really be cooking with gas.

So I conceive of the Semantic Web as including applications built as collaborative, emergent systems.
Herein lies my problem. The Semantic Web as defined by Tim Berners-Lee, and expressed in his paper on the design issues for the Semantic Web, expressly excludes any type of fuzzy system from being a Semantic Web application (see the excerpt and comments below). This is because he requires applications to be logically provable and guaranteed, so that first-order predicate calculus (predicate logic) is the only logic the Semantic Web admits. The example TBL gives is of a banking application needing to be guaranteed.
I have two main issues with this:

1) Why exclude the Semantic Web from the exciting possibilities of fuzzy and statistical approaches to semantic systems? Can't both be included? A banking application just requires stricter criteria on the statements it can operate on. Applications don't need to be guaranteed to be useful (although I admit banking applications do!).

2) Will this massively scale? What gives us reason to believe it will? FOPC-based systems have proven difficult to scale in several fields so far. TBL admits that the Semantic Web approach is not very different from previous approaches that did fail to scale. The basic point is that FOPC-based systems cannot cope with inconsistency (as TBL points out), and as you scale, keeping consistency in practice becomes harder.

So, what will the Semantic Web be like? I guess in time the real Semantic Web will stand up.

The rest of this post looks at TBL's Semantic Web design paper in more detail and may not be of great interest to most readers.

First of all, thanks Rick and Ian for persevering with all my questions.

Fuzzy or not has been the main theme behind all my SW posts to date. Tim Berners-Lee is quite clear: not.
I just don't get why not; certainty is just a special case of fuzziness, so why can't we include both?

We are back again to where I started, Perfect or Sloppy - RDF, Shirky and Wittgenstein, which was based on the Tim Berners-Lee paper you mentioned, Rick.

This quote has almost the entire point I am trying to make in it. I'll take a few sentences at a time and explain what they mean to me.

"The FOPC inference model is extremely intolerant of inconsistency [i.e. P(x) & NOT (P(X)) -> Q], the semantic web has to tolerate many kinds of inconsistency.

Toleration of inconsistency can only be done by fuzzy systems. We need a semantic web which will provide guarantees, and about which one can reason with logic. (A fuzzy system might be good for finding a proof -- but then it should be able to go back and justify each deduction logically to produce a proof in the unifying HOL language which anyone can check) Any real SW system will work not by believing anything it reads on the web but by checking the source of any information. (I wish people would learn to do this on the Web as it is!). So in fact, a rule will allow a system to infer things only from statements of a particular form signed by particular keys. Within such a system, an inconsistency is a serious problem, not something to be worked around. If my bank says my bank balance is $100 and my computer says it is $200, then we need to figure out the problem. Same with launching missiles, IMHO. The semantic web model is that a URI dereferences to a document which parses to a directed labeled graph of statements. The statements can have URIs as parameters, so they can make statements about documents and about other statements. So you can express trust and reason about it, and limit your information to trusted consistent data."

1) Toleration of inconsistency can only be done by fuzzy systems. We need a semantic web which will provide guarantees, and about which one can reason with logic.
Here TBL specifically excludes fuzzy approaches from the Semantic Web. By extension, other statistical and learning-based approaches to knowledge systems are also excluded. The reason given is that being guaranteed and provable is an absolute requirement: if your app is not guaranteed, it is not a Semantic Web app. This immediately limits the concept of the Semantic Web to what is computable by logic rather than what is usefully computable by any means.
Sure, banking applications do need to be guaranteed, so they should use rules that only operate on provable, trusted statements. But there are loads of applications of semantics where usefulness, rather than guarantees, is the goal.
I do not see why it need be one or the other; you just have stricter requirements for proof in a banking app than in a fuzzy app. See Semantic Superpositions for thoughts on a semantic web that includes fuzziness.
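For readers who want the "intolerant of inconsistency" point made concrete, here is a minimal sketch in Lean (my illustration, not anything from TBL's paper) of the explosion rule quoted above: from P and NOT P, any proposition Q follows.

```lean
-- Ex falso quodlibet: given a proof of P and a proof of ¬P, any proposition Q
-- can be derived, which is why a purely FOPC-style reasoner cannot simply
-- shrug off a contradiction in its data.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```

This is exactly why, within a guaranteed system, an inconsistency has to be treated as a serious problem rather than something to be worked around.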

Considering FOPC approaches have been largely discredited in the field of AI and replaced by fuzziness, this would seem a risky limitation to impose.

2) Any real SW system will work not by believing anything it reads on the web but by checking the source of any information. (I wish people would learn to do this on the Web as it is!). So in fact, a rule will allow a system to infer things only from statements of a particular form signed by particular keys. Within such a system, an inconsistency is a serious problem, not something to be worked around.
The necessary consequence of 1) is, as TBL states here, that in any SW system an inconsistency is a serious problem. Because of the guarantee requirement, it isn't even enough that the data is accidentally consistent; it must be logically consistent, i.e. you should only ever encounter an inconsistency if there is a programming fault or corruption, and standard user action should not be a factor. That is, the statements a SW app uses must be guaranteed consistent.

This means Semantic Web applications are quite fragile: the larger the scale, the harder it is to maintain consistency in practice. Statistical approaches work the opposite way: the larger the scale, the better they work.

Any SW application therefore requires there to be only one version of the truth, i.e. it can only work with consistent statements. However, there are many things we wish to describe where there is no one version of the truth.
Here is the rub: this is a result only of the requirement to be logically guaranteed. There are many computational approaches that can operate on inconsistent statements (fuzzy systems, statistical approaches, neural networks), and these can mine huge value out of those statements. None of that is possible with Semantic Web applications (as defined above); all those rich patterns must be collapsed into a single consistent version of the truth before the application can operate on them. The Google approach to spell checking is a great example of using such statistical approaches rather than logic to programme the spell checker.


The requirement for consistency is very tough in practice because humans are in the loop of data. Here we run straight into the fact that RDF is designed to allow multiple agencies to make statements about the same thing. Even if two agencies are using the same URI and the same definition of a particular property, when users come to enter data and have to make classification decisions based on that URI description, they will not classify the same thing in exactly the same way. The URI is not an authority; it cannot guarantee consistency between agencies. You cannot, for example, show two copies of Harry Potter to the Editions URI and ask it whether they are different editions or the same. People make that call according to their own interpretation of the description of the concept.
Reversing that around: if you receive two statements about the number of editions that exist for a Harry Potter book, one stating 1 edition and the other 2, the only way to arbitrate between them is to get the actual books out and examine them against your own interpretation of the URI definition.
What I have described above is the fact that single authorities only make sense for certain classes of problem, i.e. where there is only one version of the truth. They make perfect sense for bank accounts. In the library domain, however, each library has an equal right to make statements about a book whilst cataloguing it, so there is no concept of one authority. Similarly, who is the authority that decides whether a photo is of a smiling face or a sad face?

The result of all that is that to guarantee consistency for a particular SW system there can be only one authority for statements, or else inconsistency will arise from user actions. This allows any conflict to be resolved by asking the authority to decree. Note also that it is not good enough that statements don't conflict with published statements from the authority (the authority may not have published all possible statements); statements must actually agree with statements made by the authority.

TBL also says

"

A semantic web is not an exact rerun of a previous failed experiment

Other concerns at this point are raised about the relationship to Knowledge Representation systems: has this not been tried before with projects such as KIF and Cyc? The answer is yes, it has, more or less, and such systems have been developed a long way. They should feed the Semantic Web with design experience, and the Semantic Web may provide a source of data for reasoning engines developed in similar projects.

Many KR systems had a problem merging or interrelating two separate knowledge bases, as the model was that any concept had one and only one place in a tree of knowledge. They therefore did not scale, or pass the test of independent invention. [see evolvability]. The RDF world, by contrast, is designed with this in mind, and with the retrospective documentation of relationships between originally independent concepts."

3) They therefore did not scale, or pass the test of independent invention
For any SW app to have guaranteed consistency, independent invention is not possible, because you would need to force all statements from two separate agencies to be the same, and that means they are not independent at all: one agency is not free to act independently of another, because doing so would cause inconsistency.
It rather seems, then, that for all intents and purposes independent descriptions are excluded from any particular SW app by the requirement to achieve consistency. Exactly how does a Semantic Web app then differ from those failed experiments?

 

To sum up, I can't understand why the Semantic Web (at least as described by TBL) should exclude any approach based on fuzziness, statistics and inconsistency. The requirement of consistency, when taking statements from different systems, cannot be met, because humans cannot be made to all agree on classification statements (whatever training or manuals you give them) and will therefore make inconsistent statements through their use of the computer systems. Whilst RDF is free to describe all the variety in the world, the Semantic Web application can only make use of the tiniest portion of it.

From some of the comments I have received, clearly some people agree with the TBL vision and others don't.
In the end I guess it doesn't really matter. People will use RDF to do cool things and call them semantic apps even if they don't accord with TBL's FOPC requirement for proof. I do think it is at the basis of a lot of scepticism from outside the Semantic Web community, though, given the spectacular failure of FOPC to scale in previous attempts by the AI and KR communities. It might be an idea to present this stuff really clearly, to either face up to this criticism or prove it false.

I personally have had enough of this topic now and am going to think about other things for a while :-)

Thanks to all those who have contributed to the discussion. I'm sure there are lots of people out there who will disagree with things I have said above. It just goes to show how hard it is to get people to share the same concept of things; the world is fuzzy after all.

Posted on Wednesday, August 10, 2005 at 06:27AM by Justin Leavesley

Semantic SuperPositions

So I have been thinking a lot about the inconsistency in RDF graphs that must arise whenever more than one authority has an equal right to make statements about the same thing. The foundation of the problem is that there is no a priori way to choose one set of statements over another; they are both equally true. And this will happen a lot when you take data from multiple systems, because you cannot get users to all act in the same way when making classification decisions, even by giving them detailed training and manuals.

I wondered what it might look like if we generalised the approach to computing with the Semantic Web so that FOPC (programmatic logic) was a special case rather than the only case.
That is, we accept inconsistency as a fundamental property. This would allow the Semantic Web to embrace both bottom-up, highly inconsistent folksonomies and strict banking applications within the same logic, proof and trust framework.

In traditional software applications there is only one authority; only one version of the truth is allowed, and so inconsistency as a fundamental issue doesn't arise. In the post Schrödinger's Web and its comments I tried to give an example of how easily this inconsistency arises, even in a simple system, as soon as you allow multiple authorities an equal right to make statements, even when the software systems are identical. This is because, however you train the users, when presented with choices to make in day-to-day operations no two users of separate systems will make exactly the same decisions about the classification of the same thing.

I will borrow the concept of superposition from quantum mechanics. A superposition is just the sum of all the possible states; in this case that would be a collection of inconsistent statements along with their sources. Call it a superposition node.

When graphs are joined and inconsistencies occur, a superposition node could be created in the graph.
Superposition nodes are different: FOPC (programmatic logic) cannot be applied directly to them (it would fail on the inconsistencies). Instead, statistical operators can be applied to merge or transform (but not collapse) different superpositions, or a collapsing function can operate on a node to choose the winning statement and collapse the superposition into an ordinary node. If your application does wish to operate on the node with programmatic logic, then a collapsing function must be supplied. The collapsing function uses some rule appropriate to the application to collapse the superposition into a consistent statement that can then be safely operated on by FOPC.

If a superposition node occurs in a portion of your graph that you don't need to operate on programmatically, then it need never be collapsed.
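Here is a minimal sketch of the idea in Python (the names and structure are my own invention, a thought experiment rather than a proposal for a concrete API):

```python
# Sketch only: a superposition node keeps every conflicting statement together
# with its source, and is collapsed only when an application needs one value.

class SuperpositionNode:
    def __init__(self):
        self.statements = []  # list of (source, value) pairs

    def add(self, source, value):
        self.statements.append((source, value))

    def collapse(self, collapsing_fn):
        """Reduce the conflicting statements to a single value using an
        application-supplied rule; only the collapsed result is safe to feed
        into ordinary programmatic (FOPC-style) logic."""
        return collapsing_fn(self.statements)
```

Ordinary nodes stay exactly as they are; a superposition node only appears where graphs disagree, and only applications that need programmatic logic at that point pay the cost of supplying a collapsing function.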

Single Authority Example: Banking
For example, a banking application would use a collapsing function based on which statements also contained trusted proof that they had the authority of the banking system; in this case there is only one true authority, so it is straightforward to collapse the superposition and allow the application logic to operate. Note that the application logic should not have to know how to collapse the superposition; it just requires consistent, trusted data.

Multiple Authority Example: Library System Catalogues
In the previous example I gave, where multiple libraries catalogue the same work with different numbers of editions, when these descriptions are brought together a superposition node occurs in the graph at the editions point. So a work is related to a superposition of editions. Here the collapsing function could be statistical, choosing whichever was the most popular statement. The rest of the application then works off that choice.
But if you were creating an application for a university research library, maybe you would create a collapsing function that gave extra weight to the statements of other research libraries over public libraries.
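Continuing the sketch above with the library example (the sources and edition counts are invented), the same conflicting statements can be collapsed differently by different applications:

```python
from collections import Counter

# The contents of a superposition node for "number of editions" of one work:
# conflicting statements, each kept with its source.
statements = [
    ("public-library-A", 1),
    ("public-library-B", 1),
    ("research-library-C", 2),
]

def most_popular(stmts):
    """Statistical collapse: pick the most frequently asserted value."""
    return Counter(value for _, value in stmts).most_common(1)[0][0]

def prefer_research_libraries(stmts):
    """Application-specific collapse: weight research libraries more heavily."""
    weights = Counter()
    for source, value in stmts:
        weights[value] += 3 if source.startswith("research") else 1
    return weights.most_common(1)[0][0]

print(most_popular(statements))               # -> 1
print(prefer_research_libraries(statements))  # -> 2
```

The application logic downstream never sees the disagreement; it only sees whichever single value its own collapsing function chose.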

Statistical Operators
For some applications the major work will be in statistical operations on superpositions rather than in any programmatic logic. The Google spell checker springs to mind. It effectively uses statistical methods based on what many people have typed and then retyped when getting no results, combined with people's reactions to the offered spelling corrections.
The real work is in the manipulation of the superposition rather than in any procedural logic based on a collapse.

A few comments on the previous posting have suggested we need to change the way users behave. I think that is unrealistic. I would rather we created systems that supported the way users do behave (and that, alas, is inconsistently).

So those are my thoughts on inconsistency and the Semantic Web.
Trying to avoid inconsistency seems so limiting to me, and I don't see why we should.

Posted on Tuesday, August 9, 2005 at 01:49PM by Justin Leavesley

Schrödinger's Web

Looking back at my post Perfect or Sloppy - RDF, Shirky and Wittgenstein and Danny's detailed response, Wittgenstein's Laptop (sorry you lost the original post, Danny), a couple of things are clear. I didn't do a good job of explaining what I think the issue is, and it was a bit "them and us" (not the intention).

Ian also made a good point. I should clarify that the issues that were bugging me are not with RDF itself but with the layers further up the Semantic Web stack, specifically the logic and proof layers built on top of RDF.

I would like to describe how I understand the proposed Semantic Web stack and ask the community how certain questions have been covered off. It may be that I misunderstand the vision, or that the questions I have have already been answered.

As I understand it, the RDF and Ontology layers allow graphs of statements to be made and linked together. Multiple descriptions of a concept can be made, and RDF allows inconsistency. The query level allows portions of graphs to be selected or joined together, and the logic level allows new knowledge to be inferred from the statements and questions to be answered using the mass of RDF statements. I understand this logic to be first-order predicate calculus (FOPC)?

My concern is that the logic layer is very intolerant of inconsistency or error. From what I have been able to find, it seems the proposed answer to this is to limit the scope of the logic to trusted, consistent statements, or else user arbitration of conflicts will be required. This is the root of my concern: I cannot see how this is possible. Inconsistency is not just generated at the system logic or schema level; it is deeper. It is the necessary result of allowing multiple descriptions of the same thing.

Inconsistency will always arise whenever humans have to make classification choices. This was one of the points in my previous post.

Danny was quite right to point out that most software today requires consistency. We all know the lengths programmers go to to ensure consistency, and this is because programmatic methods are based on predicate logic. If a program enters an inconsistent state, usually that thread of execution must end. If the inconsistency is in persistent data you are in real trouble, because restarting won't fix the problem.

Compilers enforce the consistency of the code but the data in the system must also be consistent if programmatic LOGIC is to be based on it.

Two principal methods are used to achieve this:

1. Limit the system to one description of an entity, i.e. no competing descriptions, e.g. one record per entity ID.

2. Mark fields as non-programmatic, e.g. text descriptions. The contents of these fields will not be used by the program logic; they are for human use only.

With this approach, any uncertainty in programmatic fields cannot generate inconsistency, principally because there is only one version of the truth, i.e. statements are orthogonal.

Now contrast this with the Semantic Web, where by definition you will be working with descriptions from many systems. Inconsistency will be a natural feature, not an error condition.

Note the fundamental nature of the inconsistency: it is not a property of the different systems. Two identical systems will still yield inconsistency, because it is a function of how people use a system, not of the system itself.

I confused the previous example by suggesting two different systems with slightly different schemas.

This time consider two identical library systems, both of which have a schema with the concept of editions of a work, defined by the same RDF URI. In one system the librarian considers there to be a new edition whenever there is any difference, such as the two different covers for the same Harry Potter, and catalogues accordingly. The librarian using the other system thinks it is a different edition only if the contents are different.

Now, taking descriptions from both systems, you will get an inconsistency: does the work Harry Potter have 1 or 2 editions?

This is not something you can fix by giving a different URI to the editions concept for each system, because the inconsistency is the result of the classification decision made by that person, for that record, in that system, at that time, i.e. it is not systematic. The result is that inconsistency will arise in an unpredictable way even between identical software systems with identical schemas. (It is one reason why integration of different systems remains a pain even if you use RDF.)
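To make that concrete, here is a small sketch using the rdflib Python library (the URIs are invented for illustration). RDF itself is perfectly happy to hold both statements side by side; it is the logic layered on top that has to decide what to do with them.

```python
# Sketch with rdflib; the URIs below are invented for illustration only.
from rdflib import Graph, Literal, URIRef

work = URIRef("http://example.org/work/harry-potter")
editions = URIRef("http://example.org/vocab/numberOfEditions")

g = Graph()
g.add((work, editions, Literal(2)))  # system A: different covers count as editions
g.add((work, editions, Literal(1)))  # system B: only different contents count

# The graph simply records what the two cataloguers said; it does not (and
# cannot) decide which of them is right.
for _, _, value in g.triples((work, editions, None)):
    print(value)  # prints 2 and 1, in no guaranteed order
```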

This inconsistency isn't a problem in its own right. But if layers of predicate logic are working off this data, then those layers will become unstable very quickly.

My current understanding is that the SW community is suggesting either that inconsistency be avoided (how, when it is a fundamental result of allowing multiple descriptions of the same thing?) or that the system should ask a user at that point to arbitrate (on what basis should they choose one over the other? They are both right).

It strikes me that if inconsistency is fundamental then it should be treated as such, not something to be avoided.

Isn't the SW approach today, based on predicate logic, simply using the wrong maths? Just as the AI community was before it embraced fuzziness, uncertainty and statistics? Or the classical physics community before quantum mechanics?

That transition saw AI moving from "programming" AI systems with rules and logic to creating learning systems that needed training.
It seems to me that the internet has 1 billion users capable of training it. We see examples of this in things like Google spell checking, which, rather than relying on a traditional dictionary, is based upon what people type and then retype when they get no results. When a spelling suggestion is given, the user's decision to accept it or not provides further feedback, or training, as to what is useful spelling help and what is not. This turns out to work much better than the programmed approach. Other examples that spring to mind are del.icio.us and Flickr.

Realising that a work has both 1 and 2 editions at the same time seems to me to be exactly the position classical physics found itself in at the birth of quantum physics. The maths of classical physics could not cope with particles being in several locations at the same time. Neither could the classical physicist!

A new maths was required, one based upon uncertainty and probability. This maths is very well understood and forms the basis of solid-state physics, upon which electronic engineering is based, upon which of course the computer is based!

So I guess my question is this: is the logic layer intended to be FOPC, and if so, why? Who is ensuring that the SW community isn't falling into the same traps the AI community did? What can be learned from the AI community?
What is the problem with using probability-based maths? It works for physics!

Maybe all of these have good answers. If so, I wasn't able to find them easily. Or is my understanding of the logic layers wrong?

Please let me know.

 

 

Posted on Sunday, August 7, 2005 at 10:53AM by Justin Leavesley