The intuitive ideas here are original and valuable, but the technical follow-through is weak.
On the positive side is the idea that you can attach probabilities to sets of stress scenarios using graphical models, and the case Rebonato makes for the advantages of doing so.
There are some issues, however.
Rebonato does not seem to be aware of whole bodies of theoretical and practical work on normative probabilistic reasoning in machine learning: on the theoretical side, Koller and Friedman or Wainwright and Jordan, to name two, or Heckerman on the practical side. Then there is his idea that you should calibrate your model using linear programming. This is 'algorithmically' sound, but it is not 'principled': the usual way to calibrate such models is by maximum entropy (and max-ent would at least mitigate, if not solve, some of the problems he mentions, such as missing marginal probabilities). True, max-ent is trickier than linear programming, but there is a lot of work on this specific problem that Rebonato does not seem to be aware of. It is telling that the bibliography is focussed more or less completely on risk texts and more philosophically oriented Bayesian ones; serious machine learning texts aren't cited.
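To make the max-ent point concrete, here is a minimal sketch (the event names and numbers are my own illustration, not from the book) of calibrating a discrete joint distribution to expert-assigned marginals by iterative proportional fitting, which converges to the maximum-entropy joint consistent with those marginals:

```python
import numpy as np

# Hypothetical inputs: two binary stress events A and B, with expert-assigned
# marginals P(A=1)=0.1 and P(B=1)=0.25.  Iterative proportional fitting (IPF)
# starts from a uniform joint and alternately rescales rows and columns until
# both marginals are matched; the fixed point is the max-ent joint.
target_a = np.array([0.90, 0.10])   # P(A=0), P(A=1)
target_b = np.array([0.75, 0.25])   # P(B=0), P(B=1)

joint = np.full((2, 2), 0.25)       # start from the uniform distribution
for _ in range(100):
    joint *= (target_a / joint.sum(axis=1))[:, None]  # match the A-marginal
    joint *= (target_b / joint.sum(axis=0))[None, :]  # match the B-marginal

# With only marginal constraints, the max-ent solution is the independent
# joint: joint[i, j] == target_a[i] * target_b[j].  Adding a constraint such
# as a conditional P(B=1 | A=1) would just mean one more rescaling step.
print(joint)
```

Note how a missing marginal is no obstacle here: you simply drop that rescaling step, and max-ent fills the gap with the least-committal joint consistent with whatever constraints remain.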
I am also slightly (but only slightly) sceptical that Bayesian networks specifically are the best way to do what Rebonato proposes. True, they are great when you can build them: you get very good formal behaviour, plus the huge informal advantage that you can use them to tell causal stories. But a fair bit of the sort of information you might want to incorporate into a risk model may lack convincing causal structure; all you may have is correlations. If so, you are going to have to go with Markov random fields (or even hybrid models), which are intuitively, but not really formally, that closely related to Bayesian nets (though the fact that Rebonato emphasises discrete probability distributions would definitely help if you were to move in this direction).
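The distinction is easy to see in code. A minimal sketch (variables and numbers are illustrative, mine, not Rebonato's) of the Markov-random-field alternative: a symmetric pairwise potential encodes association between two binary risk factors without committing to a causal direction, which is exactly the situation when all you have is a correlation.

```python
import numpy as np

# Pairwise potential between binary risk factors X and Y: high values on
# agreement (both stressed or both calm), low on disagreement.  This encodes
# "X and Y tend to move together" with no claim about which drives which.
psi = np.array([[2.0, 0.5],
                [0.5, 2.0]])

# Gibbs normalisation: P(x, y) is proportional to psi[x, y].
joint = psi / psi.sum()

# The potential is symmetric, so the model is direction-free -- unlike a
# Bayesian-network factorisation P(x) * P(y | x), which forces a directed
# (and, in spirit, causal) story onto the same numbers.
print(joint)
```

With more variables, the joint becomes a product of such potentials over the graph's cliques, and the normalisation constant is where the real computational work (and the max-ent machinery) comes in.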
Finally, there is the genuinely odd 'roll your own' discussion of linear programming. I've already said that linear programming may not be the best tool here, but this discussion is revealing for other reasons. Rebonato tells us that he built his first LP solver in Visual Basic (! - in Excel?) and then reimplemented it in C++. To me, this is like building your own drill as part of a project to build a boat (Tim Severin or Thor Heyerdahl might do it, but the rest of us are better advised to head round to the local building suppliers and pick up a Bosch power drill). I mean, if RBS doesn't have a copy of Matlab lying around, then Octave has a good linear programming solver, and if you need more horsepower, there are systems that will outperform a homebrew LP solver like a Bosch will outperform a bow drill.
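For scale, here is the whole of the off-the-shelf route, sketched with scipy's stock solver (the events and probabilities are my hypothetical example of the kind of bounding problem an LP is used for here, not taken from the book):

```python
import numpy as np
from scipy.optimize import linprog

# Variables: the four joint probabilities p = [P(00), P(01), P(10), P(11)]
# over two binary stress events A and B.  Hypothetical expert inputs:
# P(A=1) = 0.3 and P(B=1) = 0.4.
A_eq = np.array([
    [1, 1, 1, 1],   # probabilities sum to one
    [0, 0, 1, 1],   # P(A=1)
    [0, 1, 0, 1],   # P(B=1)
])
b_eq = np.array([1.0, 0.3, 0.4])

# Bound the unconstrained quantity P(A=1, B=1) by minimising and then
# maximising the last coordinate subject to the constraints above.
c = np.array([0.0, 0.0, 0.0, 1.0])
lo = linprog(c,  A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(lo.fun, -hi.fun)  # the Fréchet bounds: 0.0 and 0.3
```

That is a dozen lines against a tested solver, versus two homebrew implementations in two languages; the point stands regardless of whether you reach for scipy, Octave's glpk, or something industrial-strength.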
In summary, if you were going to use Rebonato's ideas for real, you might want to use something like Koller as your technical foundation, and rely on Rebonato more for big picture and ideas.
P.S. [added 26.8.11] And, right on cue, a brochure has landed in my mailbox to tell me that the man himself is presenting a course on his book just down the road (well, 400km down the A8) from me next month, and, indeed, emphasis seems to have shifted from linear programming (which is not mentioned - but presumably is still deployed) to entropy maximisation.