The future, possibly never-to-be developed RuleForge site will be a combination of:

  • Content about business rules and rule engines, based on old content.
  • A Semantic web, RDF style CMS supporting the RuleForge content.
  • A small rule/inference engine capable of running simple examples of rule sets.

Subjects will be:

  • Rule Engines and Rule-Based Systems
  • Knowledge Engineering
  • Business Rules
  • Semantic Web, more specifically as knowledge representation

1.  Primary Objectives

The content management system for the RuleForge site will be built as new functionality on top of the semwiki part of the project. The key subject will be rule engines and rule-based systems, including, to some extent, workflow engines. There should be just enough machinery under the hood to run simple examples of rule sets.

Note that most business rule and rule engine sites concentrate on technical capabilities, that is, code and complex algorithms. I think there is much more to be said about the formal, logical structure of rule bases themselves than about rule engines and computer solutions. For example, I would like to have a repository of several sets of customer order validation rules and associated workflows, based on business design patterns and an analytic framework for real-world business problems.
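To make that idea concrete, such a rule set can be represented as plain data plus predicates, quite apart from any engine. A minimal Python sketch ( the field names customer_id, items, and total are illustrative assumptions, not an existing RuleForge schema ):

```python
# A minimal, declarative customer-order validation rule set.
# Each rule is a (description, predicate) pair; the predicate
# returns True when the order satisfies the rule.

RULES = [
    ("order must reference a customer",
     lambda o: bool(o.get("customer_id"))),
    ("order must contain at least one item",
     lambda o: len(o.get("items", [])) > 0),
    ("order total must be positive",
     lambda o: o.get("total", 0) > 0),
]

def validate(order):
    """Return the descriptions of the rules this order violates."""
    return [desc for desc, check in RULES if not check(order)]
```

The point is that the rule base is just data; swapping in different validation rule sets ( or publishing them as content ) needs no engine changes.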

Of course, the basic content will still cover the traditional technologies and components used to build rule-based systems, that is, the specific components needed to implement complex rule-based workflow engines. But implemented in a simple way, of course.

Note: I've never heard a business user request a 'complicated' feature for their system - they always say something like "I want to integrate order entry, manufacturing and shipping ... but keep it simple". :-)

Much of the content for the RuleForge site will be carried over from the old site's content.

2.  Secondary Objectives

  • Create new content types while maintaining simple and direct path/content type mapping.
  • Use plug-ins and a component architecture as much as possible, keeping the upgrade path in mind.
  • Run more complex rule/inference engines, extending event and workflow mechanism for rules.
  • Provide a high level of host/localhost integration with peer-to-peer capabilities, for desktop tie-back.

3.  Extended Functions

RuleForge semantic functions extending Semantastic are:

  • A Simple Inference Engine for "Semantic Tagging"
    • at least some capability for relationships between tags
  • A Simple Inference Engine for Running Rule Base Demos
    • some classic "customer order", family tree and diagnostic examples
  • A Full-Featured Search Engine
    • probably external to the RuleForge app
    • includes objects from all realms, including weblog: ( likely too big for wiki search )
    • post-processing of discovered external links into a master links DB
  • Type and Form Templates
    • an engine to drive Semantic MediaWiki-style templates, maybe
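The inference engine behind the rule base demos need not be large. A minimal forward-chaining sketch in Python, using the family tree example ( the rule and the names are illustrative ):

```python
# A tiny forward-chaining inference engine: facts are (relation, a, b)
# triples, and each rule is a function that yields new facts derived
# from the current fact set. Inference loops until a fixed point.

def infer(facts, rules):
    """Apply rules repeatedly until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

def grandparent_rule(facts):
    """The parent of a parent is a grandparent."""
    parents = [(a, b) for rel, a, b in facts if rel == "parent"]
    for a, b in parents:
        for c, d in parents:
            if b == c:
                yield ("grandparent", a, d)

facts = {("parent", "Tom", "Bob"), ("parent", "Bob", "Ann")}
derived = infer(facts, [grandparent_rule])
```

Here `derived` contains the original facts plus ("grandparent", "Tom", "Ann"). The same loop would run the customer order and diagnostic demos, given suitable rule functions.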

Note that some of these requirements may overlap those of the Semantastic project. In fact, rule-engine-type functions may be required for 'truth maintenance' in a highly complex, non-relational RDF type of table ( with 3-way to 5-way relationships ). It may not be feasible in practice without an advanced truth maintenance engine of some sort.

4.  Taming the Trac Software Management ... Whatever

The RuleForge part of this project had a previous ( now defunct ) incarnation running on a Trac wiki. It was a bit of a challenge.

Trac is an interesting animal. It is like several animals successfully stitched together - it does a couple of different things very well at the same time, a claim few applications can make.

Trac describes itself as "an enhanced wiki and issue tracking system for software development projects" and an "interface to Subversion (and other version control systems)". It also has a good plugin system and a reporting subsystem.

  • Wiki
    • Creole Markup, mostly standard
    • Trac Links, realm: preface similar to a namespace
  • Issue Tracking
    • Tickets, Workflow and Scheduling
    • Report Subsystem
  • Interface to Version Control
    • Mostly Subversion, some Git
    • Source Browser, Changesets
  • Plugins

Again, Trac does all these things well. So why do I say "taming Trac"? Maybe taming is the wrong word; the phrase 're-purposing Trac' is better. That doesn't make the task any easier.

For example, styles are essentially hard coded into the Trac system - developers aren't supposed to care about looks, I guess. In principle, the site environment overrides the core content generation templates, but it's not as clean as a separate theme library with its own directory, etc. There is a Trac Themes plugin, but it has the feeling of something tacked on 'after the fact'.

Unfortunately, I speak from hard experience. The task of re-theming Trac turned out to be a major sub-project I had not anticipated.

So, why bother with it at all? There are other fish in the Python-powered wiki scene; maybe I should find them and move on with my life. MoinMoin has good stuff in it too.

Well ... there is another thing that Trac does well - TracLinks. They deserve special attention because TracLinks implement flexible 'realm:' type prefixes, such as "person:Bob". This type of data structure is notoriously difficult to handle in a many-to-many table, but realm: prefixes can also enable 'multiway' RDF relational semantics very directly. In other words, they allow dynamic relational expressions such as <person:bob | address: | '2211 First Street'>, in an object/relation/value pattern.
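As a sketch of how such an expression might be decomposed, here is a small Python parser for the object/relation/value pattern. The exact <a | b | c> syntax used here is my own illustration, not Trac's actual link grammar:

```python
# Parse an expression like
#   <person:bob | address: | '2211 First Street'>
# into an (object, relation, value) triple. The delimiters and
# quoting rules are illustrative assumptions, not Trac syntax.

def parse_expression(expr):
    """Split <a | b | c> into three fields, stripping quotes
    and the trailing ':' on the relation."""
    inner = expr.strip().lstrip("<").rstrip(">")
    parts = [p.strip().strip("'") for p in inner.split("|")]
    if len(parts) != 3:
        raise ValueError("expected object | relation | value")
    obj, rel, val = parts
    return (obj, rel.rstrip(":"), val)
```

For example, parse_expression("<person:bob | address: | '2211 First Street'>") yields ("person:bob", "address", "2211 First Street"), a triple that could be stored directly in an RDF-style table.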

So, a basic implementation of semantic extensions already exists within the 'namespace' feature of the Trac database. And, by the way, MediaWiki's Semantic MediaWiki extension requires a fair amount of code just to deliver functionality that is essentially free in Trac. Very tempting ...

5.  A Brutal Solution

One potential approach ( the 'brutalist' approach ) to the themes issue is to create a monolithic, standalone Trac installation using the following steps.

  1. Unzip the Trac zip file.
  2. Copy trac/admin/ to the 'root' directory, above the trac code library.
  3. Run as if it were trac-admin, initializing a site directory mytrac off the root.
  4. Run Apache htpasswd in the /mytrac directory for Trac admin.
  5. Copy trac/web/ to the 'root', above the /trac code base.
  6. Start as if it were tracd, with appropriate parameters and a port of choice.

That is basically it. Just unzip to a user lib, copy, configure a bit, and then run it. At that point, I can go right into the core code and savage it at will. Barring a few Python dependencies, this will not disturb anything else on the system, for instance a system-wide or virtualenv install of Trac.

There may be some more configuration issues depending on your particular situation, such as the port assignment ( 8080 or whatever ). Obviously, this is a solution for localhost or VPS hosting and won't work as a public site on a shared server, but the same brutalist approach might work for CGI or FastCGI shared hosting. I now have a VPS, so a public Trac standalone site is an option ... the security gods willing.

Anyway, it's working for me so far, if not in the way that the Trac developers intended. More to come ...

Page last modified on September 09, 2014, at 03:13 PM