
01 Oct Methodology or Mythology

Call me a “nut”, but I have always been enthralled by science fiction. An image of Dave, the surprised and confused astronaut from 2001: A Space Odyssey, stays in my mind. In his eyes, I could see his brain working frantically to figure out how to master the situation, then giving way to hopelessness. The machine seemed calmly diabolical: possessed by a demon bent on destroying humanity. Yet 2001 has come and gone, and computers seem far from competent enough to outmaneuver humans in complex problem solving. Will HAL 9000, JARVIS, Commander Data, C-3PO or the Terminator ever exist? Will your phone ever be able to chat with you intelligently when there is not another human on the line? Why has the promise of AI, and of automation in general, fallen short of expectations (as well as fears)? I think the answer is complex and has many facets.

David Fogel (not the astronaut) said: “All our attempts to generate artificially intelligent machines have failed to realize the dreams of the pioneers in computer science who envisioned machines that could ‘think’ faster than us, without error, that never grew old, never got sick and never got tired. Our best minds have worked on building smart machines for more than five decades. Yet, we’ve not only failed to create a machine as intelligent as HAL, we’ve also failed to recognize a pathway that would lead to its production” (Fogel, 2002, p. 4).

Maybe we aren’t there yet because we haven’t done the right kind of planning or careful enough execution. Planning and the “careful” part of execution sometimes go by the wayside in technology implementation efforts, but they should not (Baumbach 2015). I have some ideas about how progress in AI and in all systems development may be accelerated through discipline – a discipline similar to software engineering methodology, but with a twist. I foresee such machines at the end of code.


Black Art

In the early days of automated data processing, highly technical people used computers for very narrowly defined mathematical problems, like calculating rocket trajectories, and for symbolic problems, like code making and breaking. The symbolic problems, it turns out, could often be reduced to mathematical problems. As long as the problem was very narrowly defined, with a small set of known inputs and a verifiable output, the effort to create the solution using operation codes or a programming language such as assembly worked out quite well, with predictably valid results.

Then somebody got the idea that ordinary people could use computers for everyday problems like double-entry accounting and ordering raw materials and supplies. As the problems became more “commonplace” the solutions proliferated and simplicity (and elegance) flew out the window. This was a natural consequence of two phenomena:

  1. The inventors were writing the rules as they needed them, and the languages’ capabilities outpaced the maturity of the process.
  2. Practitioners made common errors born of misconceptions, haste or plain laziness.

We have learned that a great idea doesn’t always become a great computer program. Furthermore, just because the program can output the words “Hello World” doesn’t mean it is ready to speak intelligently with you. Somewhere between idea and completed product, there must be an appropriate balance of C, D and E (Competence in software development methods, Discipline in planning and Expertise in the domain).

Competence

I have found that successful projects often need different personality types in the different roles. The person who can translate human needs into a good design may not be a good developer, and vice versa. And another person, one who is good at juggling scope, schedule and budget (SSB in my picture below), is usually needed to manage the effort. Not everyone can manage the complexity of a software development project. And not everyone can write a computer program more complex than “Hello World”. Here are some of the challenges of programming:

  • Programming languages are not like human languages.
  • Programming is hard to learn.
  • Source code is hard to write.
  • Programming is time-consuming.
  • Source code is hard to read and understand.
  • Source code is hard to debug.
  • Source code is hard to maintain.
  • You have to think sequentially and logically to write good code.

I will leave the discussion on “competence” at this: it is not a question of having it or not as much as it is a question of harnessing what competence each contributor brings to a project to get good results along the shortest path from idea to working software. Enter methodology. As with any craft, trade or profession, there are ways to improve the quality of the work and dramatically increase the probability of success.

JOCWOL Continuum

Discipline, or Candy Store Methodology

Organizations with infinite budgets can probably afford every shiny thing they see. They can have the whole candy store, for a price. But the cost of that much candy is much higher than it appears at the outset, and more organizations are finding that custom building automated capabilities does not necessarily lead to better outcomes or justify the expense. The alternative: purchase or outsource most basic capabilities, and build only those narrowly defined capabilities that represent your significant business differentiators. The same is true when building intelligent systems: it makes sense to leverage common components from commercially available or open source systems that have already been proven to work, and to focus the development effort on the really innovative functions and processes.

My kids often like to do what I did as a child: cliff diving. We have gone to places on the St. Croix River northeast of St. Paul; these days it’s mostly a granite quarry in central Minnesota. It’s fun, but we always follow one safety rule: look at the place where you’ll be diving to make sure it’s deep enough and free of obstacles that could kill you. Software projects can be fun too, and the same rule applies: have the necessary discipline to ensure you don’t kill the project. This is my JOCWOL Continuum.

Keeping a project within scope, schedule and budget (SSB Success in the illustration above) is very difficult, even with good methodology. With too little discipline, project costs grow without necessarily adding value, often while diminishing the value of the outcome; such projects can crash and burn on the rocks below (a possible consequence when you Jump Off a Cliff WithOut Looking – JOCWOL). Too much discipline wraps the project in red tape until you reach “analysis paralysis”. Finding that balance is tough because it is different for every project. The sliding scales that tell you where along the continuum you should be are:

  1. The complexity of the problem being solved
  2. The complexity of the components that comprise the solution
  3. The maturity of the components
  4. The general depth of the team of developers (and analysts, testers and project managers)
  5. The specific experience team members have developing the same kind of system

The lower 1 and 2 are, and the higher 3, 4 and 5 are, the less rigor is required in the planning, requirements and design phases. Every project will be optimally efficient at a different place on the JOCWOL continuum. Agile methodologies, when implemented well, can reduce risks for many projects, but the rules of discipline don’t change between agile and waterfall methodologies. Test-based development helps ensure that the code will meet the requirements, but the development of the test plans needs to be very rigorous. Traceability between requirements and test plans is a must for successful implementations.
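
To make those sliding scales concrete, here is a minimal, hypothetical sketch in Python of how the five factors above could be folded into a rough rigor score. The 1-to-5 ratings, the weighting, and the thresholds are my own illustrative assumptions, not a formula from this post or from any formal methodology.

```python
# Hypothetical sketch: combining the five JOCWOL sliding scales into a rough
# "rigor" recommendation. Ratings, weights and thresholds are invented for
# illustration only.

def required_rigor(problem_complexity, component_complexity,
                   component_maturity, team_depth, domain_experience):
    """Each argument is a subjective rating from 1 (low) to 5 (high).

    Complexity factors (scales 1 and 2) push rigor up; maturity, team depth
    and domain experience (scales 3, 4 and 5) pull it down.
    """
    pressure = problem_complexity + component_complexity          # scales 1-2
    relief = component_maturity + team_depth + domain_experience  # scales 3-5
    score = pressure * 1.5 - relief                               # arbitrary weighting
    if score >= 4:
        return "heavy up-front planning, requirements and design"
    if score >= 0:
        return "moderate rigor: lightweight specs plus traceable tests"
    return "lean process: prototype quickly, keep tests traceable"


# Example: complex problem, immature components, fairly experienced team
print(required_rigor(problem_complexity=5, component_complexity=4,
                     component_maturity=2, team_depth=4, domain_experience=3))
```

The arithmetic is not the point; the direction of the dials is: complexity pushes a project toward more rigor, while mature components and an experienced team allow a leaner process.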

Gantt Chart

Some suggest that the planning and analysis of a project should account for 60% or more of its entire allocated time. I believe this is true and applicable to both waterfall and agile projects.

Expertise in AI Projects

Finally, we come to the last element: Expertise. This component is critical for successful implementations, but it is often the most difficult to come by in commercial, government and non-profit organizations. WHAT? Is Joe saying that businesses, governments and non-profits don’t have enough experts? No – I’m saying that software projects often don’t get enough attention from those experts to ensure that the developers get the code right. If the project is to develop a quoting system for sheet-fed and web print jobs, the developers need to know what the inputs are, what the fixed data elements are, what the rules are, and what the necessary outputs are. It is almost always the case that the developers, while completely expert in spinning code, know precious little about sheet-fed and web printing.
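
As a purely invented illustration of what that captured domain knowledge might look like to a developer, here is a toy quoting sketch in Python. The field names, the rate card, and the single pricing rule are hypothetical; they are not drawn from this post or from the printing industry.

```python
# Hypothetical sketch of the knowledge a print-quoting system needs from the
# domain experts: the inputs, the fixed data elements, a rule, and the output.
from dataclasses import dataclass

@dataclass
class QuoteRequest:          # inputs supplied for each job
    pages: int
    quantity: int
    process: str             # "sheet-fed" or "web"

@dataclass
class RateCard:              # fixed data elements maintained by the estimators
    setup_charge: float
    cost_per_thousand_impressions: float

def quote(request: QuoteRequest, rates: RateCard) -> float:
    """One made-up business rule: sheet-fed jobs add a flat setup charge."""
    impressions = request.pages * request.quantity
    price = impressions / 1000 * rates.cost_per_thousand_impressions
    if request.process == "sheet-fed":
        price += rates.setup_charge
    return round(price, 2)   # the necessary output: a quoted price

print(quote(QuoteRequest(pages=16, quantity=5000, process="sheet-fed"),
            RateCard(setup_charge=75.0, cost_per_thousand_impressions=12.0)))
```

Every field and rule in a sketch like this has to come from the people who understand the print shop, not from the developers, which is exactly the attention that projects too often fail to get.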

There is a second element of expertise needed in AI, and this is the twist: the need to understand the nature of the reasoning processes required to solve the problem. Is it a problem that calls for a neural network, for case-based reasoning, or for multi-dimensional statistical modeling? Will backward-chaining or forward-chaining rules best suit the domain? Each of these approaches is fabulous, yet may be completely wrong for the domain of the problem. The context of the problem could lead to radically different approaches in AI software development. For AI projects to be successful, a combination of expertise in the application domain and expertise in the human reasoning processes used by experts in that domain is needed. These combine to form the context of successful AI projects. And, by the way, establishing a context for all of these processes is needed for them to be successful – hence my blog. This combination of competence, discipline and expertise, in context, will be behind systems that are smarter than HAL 9000 and less prone to operate contrary to the intent of their designers.
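
To give a feel for one of these reasoning styles, here is a toy forward-chaining sketch in Python: a rule fires whenever its conditions are present in the working set of facts, and the conclusions it adds can trigger further rules. The print-job facts and rules are invented examples, and a backward-chaining, case-based or statistical approach would look entirely different.

```python
# Toy forward-chaining illustration. Rules fire when their conditions are a
# subset of the known facts; firing adds new facts until nothing more can be
# inferred. The facts and rules below are invented examples.

rules = [
    ({"job:web", "run:long"}, "press:web-offset"),
    ({"job:sheet-fed", "run:short"}, "press:digital"),
    ({"press:digital"}, "quote:no-setup-charge"),
    ({"press:web-offset"}, "quote:add-makeready"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                      # sweep until no rule adds a new fact
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires
                changed = True
    return facts

print(forward_chain({"job:sheet-fed", "run:short"}))
# includes 'press:digital' and, in turn, 'quote:no-setup-charge'
```

A backward-chaining engine would instead start from the goal (the finished quote) and work backward to the facts it needs, which is one reason understanding how the domain experts actually reason matters as much as understanding their data.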
