The Daniel Guggenheim School of Aerospace Engineering at Georgia Tech




The Architecture-based Innovation, Technology Evaluation, and Capability Tradeoff (ARCHITECT) Method research is focused on creating a capability-based process for system-of-systems engineering (SoSE) that uses executable architecting to improve agility, traceability, confidence, and timeliness in early-phase acquisition decision making. The objective of this project is to research and design a process that allows capability-based evaluation of the impacts of design decisions and tradeoffs at the system-of-systems level, resulting in a decision-support environment that shows how the characteristics of DOTMLPF alternatives propagate to the mission level. This research initiative is sponsored by the Office of Naval Research (ONR).

For more information about ARCHITECT, please contact Kelly Griendling.

Team Members

Research Engineer:


Research Questions

  • What is unique about SoS problems and how can systems engineering and architecture-based engineering tools be improved to help address these challenges?
  • How do you select the appropriate metrics to study in early-phase acquisition for a SoS problem?
  • How can the alternative space be fully explored and captured in a SoS?
  • What modeling and simulation is required to support quantifiable trades for architecture-level decision making, and how should it be done?
  • How should a decision-support environment be developed to improve early-phase acquisition decision-making?
  • What measure(s) are needed to aid decision-makers in fully capturing and comparing the complexity of different SoS architectural alternatives to allow for optimization tradeoffs and cost-effectiveness comparisons?
  • What is required in an architecture framework to enable executable architecting?

The ARCHITECT Methodology

The ARCHITECT process is laid out below. This process is designed to reflect the needs of the Capabilities-Based Assessment (CBA) leading to the Joint Capabilities Integration and Development System (JCIDS) Milestone A. Each step in the process is being researched, and potential approaches are being identified and tested as part of this research.


Problem Formulation


In order to fully explore and test researched methods and techniques, an example problem is required for implementation. The sample problem is a generic mission resembling a joint suppression of enemy air defenses (J-SEAD) mission. The mission has been formulated by the team and is not intended to fully reflect reality, but rather to be a representative example that can demonstrate the utility of researched methods to military needs in general. As part of the problem formulation, several key tasks have been completed, including researching aircraft carrier capabilities and performance data, identifying and researching various aircraft carrier missions, selecting the generic J-SEAD mission as the example problem, and creating a set of baseline DoDAF views (including the OV-2, OV-3, OV-5, and SV-1/SV-2) to capture key elements of the example and provide a working platform to test and evaluate researched techniques. The baseline OV-5 with system overlays is shown here, demonstrating the process for the notional SEAD mission and which assets are responsible for which tasks within the SoS.

Gap Analysis


The first step in this research was to formulate a technique for performing gap analysis. Correctly identifying capability gaps is a critical first step toward identifying the DOTMLPF alternatives that will most improve overall military performance. The first task in a gap analysis is identifying the correct set of candidate gaps. This project has examined multiple gap analyses and determined that they are most successful when gaps can be quantified but are materiel-independent. For example, in the generic J-SEAD scenario used for this research, candidate gaps might include: mission execution time is too long, confidence in mission success is too low, the successful engagement rate is too low, too many blue units are lost, and the cost of mission execution is too high, among others. These gaps exist at the mission level, but each can be matched to a measurable criterion that quantifies its size, such as time to complete the mission, the probability of successfully engaging the required number of targets, the ratio of successful engagements to potential engagements, the number of blue losses, and the cost of mission execution. Gaps that are too generic, such as 'J-SEAD execution is poor,' are not sufficient for the acquisition process because there is no way to measure the ability of any given solution to close them.

Once candidate gaps are identified, they must be ranked. The technical solutions recommended as the output of the ARCHITECT process will be based partly on how well they fill the gaps in the as-is architecture, as well as on whether they create new gaps that were previously small or non-existent. This research has identified three criteria for identifying and ranking capability gaps. The first criterion is gap size, defined as the difference between current and desired performance. The second criterion is criticality, the importance of the gap to mission success; this criterion is heavily influenced by the decision-maker, because it reflects the importance of the gap to the military. The third criterion is the uncertainty associated with the estimates of gap size and criticality. Uncertainty can flag gaps that are likely to be more (or less) significant than the analysis suggests, and it is important for understanding the confidence in the results of the gap analysis. Based on these three criteria, the gaps can be ranked. The resulting ranking depends on the weighting of each criterion, but in general, gaps that are large in size and very important will rank at the top, while those that are small in both size and criticality will fall to the bottom.
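The ranking scheme above can be sketched as a simple weighted scoring of candidate gaps. The gap names, normalized scores, and criterion weights below are purely illustrative assumptions, not values from the project:

```python
# Illustrative sketch of the three-criterion gap ranking described above.
# Gap names, normalized scores (0-1), and weights are hypothetical.
def rank_gaps(gaps, weights):
    """Rank capability gaps by a weighted sum of size, criticality,
    and uncertainty; higher score means higher priority."""
    def score(gap):
        return sum(weights[c] * gap[c]
                   for c in ("size", "criticality", "uncertainty"))
    return sorted(gaps, key=score, reverse=True)

candidate_gaps = [
    {"name": "mission execution time too long",
     "size": 0.7, "criticality": 0.9, "uncertainty": 0.2},
    {"name": "successful engagement rate too low",
     "size": 0.5, "criticality": 0.8, "uncertainty": 0.4},
    {"name": "cost of mission execution too high",
     "size": 0.3, "criticality": 0.4, "uncertainty": 0.1},
]
weights = {"size": 0.4, "criticality": 0.4, "uncertainty": 0.2}

for gap in rank_gaps(candidate_gaps, weights):
    print(gap["name"])
```

In practice the weighting would be elicited from decision-makers, since criticality in particular reflects their priorities.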

Metrics Derivation

The next step of this research was to look at metrics derivation. Two of the methods found in the literature were aggregated to formulate a metrics derivation technique for the ARCHITECT process. There are multiple systems engineering approaches for deriving the measures or metrics necessary to assess the feasibility and viability of proposed solutions to a problem. The Practical Systems/Software Measurement (PSM) method relies on the identification of a common set of project or process issues, such as schedule and progress, resources and cost, and system performance, among others. These common goals are used as a basis for identifying more specific project issues, which are then mapped to candidate measurement categories and appropriate candidate measures. Candidate measures are selected from a variety of standard sources, such as the INCOSE Handbook or other standards, and are then evaluated against a set of selection criteria to select the measures to be used. (INCOSE Measurement Working Group, 1998)

The INCOSE Handbook gives the following criteria for a good metric (INCOSE, 2006):

  • It tells how well organizational goals and objectives are being met through processes and tasks.
  • It is simple, understandable, logical and repeatable.
  • It shows a trend, more than a snapshot or a one-time status point.
  • It is unambiguously defined.
  • Its data is economical to collect.
  • The collection, analysis, and reporting of the information is timely, permitting rapid response to problems.
  • The metric provides product and/or process insight and drives the appropriate action(s).

The SE Measurement Primer, also from INCOSE, has a similar list: good measures are relevant, complete, timely, simple, cost effective, repeatable, and accurate. (INCOSE Measurement Working Group, 1998) Combining these lists, a good measure is relevant, complete, timely, unambiguous, logical, simple, cost effective, repeatable, and accurate. This combined list provides a basis for assessing the utility of the measures chosen for study.

Another systems engineering approach for metric and measure derivation is the Goal/Question/Metric (G/Q/M) approach, which consists of four basic steps. First, an information goal is identified. Next, a set of questions is developed to evaluate whether the information goal is being met. Third, the measures, both direct and indirect, required to answer the questions are identified, along with the means to collect them. Finally, the measures are applied and their usefulness is evaluated; if they cannot fully fulfill the information goal, new measures are selected and the process is repeated. (INCOSE Measurement Working Group, 1998)

Both of these methods have limitations. G/Q/M does not provide a step to evaluate the goodness of the candidate measures, while PSM provides less structure and traceability in decomposing top-level goals into measures of interest. To rectify these limitations, G/Q/M can be combined with PSM to provide a metrics derivation approach for SoS. Because the measures are being used to evaluate alternative architectures, the three pillars of successful architecture (structure, utility, and beauty) can be used as the common set of information goals. Then, for the specific problem being addressed, a set of questions is developed for evaluating each of these with respect to the specific mission and resource needs.

Questions of organization assess the feasibility of the architecture and the proposed structure. This includes questions such as: Are the elements compatible? Do they perform the desired mission? Does the organizational structure make sense? The answers to these questions can act as a filter to weed out infeasible architectures quickly and prior to spending time evaluating other aspects of the architecture.

Questions of utility assess the performance of the architecture against the desired mission. For example, these questions might be: How well does the architecture accomplish the mission? How efficiently does the architecture accomplish the mission? This gives a way to assess whether an architecture meets performance thresholds (or, if using JCIDS terminology, KPPs).

Attractiveness questions cover the programmatic aspects of the architecture. The answers to these questions are the basic selling points for selecting a particular architecture. This includes questions such as: Is the architecture affordable? Is the architecture efficient? Is it easy to use? What is the risk associated with development of this architecture? Would a user prefer it over other alternatives? (and why?) Is the development schedule acceptable? These measures are used to differentiate between architectures which have similar (or equally desirable) performance.

The set of questions presented here is representative and would need to be developed into a more specific set for a particular project. These questions can then be used to develop a set of candidate measures, which are evaluated against the criteria for a good metric (relevant, complete, timely, unambiguous, logical, simple, cost effective, repeatable, and accurate) to determine which metrics to carry forward into the study. If the selected metrics later prove inadequate to answer the questions developed from the information goals, the process should be repeated to derive any additional metrics required. The output of this process is the first draft of the DoDAF SV-7. Applying this technique to the representative example problem used in this research results in a list of appropriate metrics that help to quantify whether or not a particular capability gap has been filled.
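As a rough sketch of the final screening step, candidate measures can be checked against the nine "good metric" criteria listed above. The candidate names and pass/fail judgments here are hypothetical:

```python
# Hypothetical sketch of the final screening step: candidate measures are
# kept only if they satisfy all nine "good metric" criteria.
GOOD_METRIC_CRITERIA = ("relevant", "complete", "timely", "unambiguous",
                        "logical", "simple", "cost_effective",
                        "repeatable", "accurate")

def select_measures(candidates):
    """Return the names of candidates that pass every criterion."""
    return [m["name"] for m in candidates
            if all(m["criteria"].get(c, False) for c in GOOD_METRIC_CRITERIA)]

candidates = [
    {"name": "time to complete mission",
     "criteria": {c: True for c in GOOD_METRIC_CRITERIA}},
    {"name": "J-SEAD execution quality",   # vague: fails 'unambiguous'
     "criteria": {**{c: True for c in GOOD_METRIC_CRITERIA},
                  "unambiguous": False}},
]

print(select_measures(candidates))  # ['time to complete mission']
```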

Alternative Identification

The next step in the ARCHITECT process is to identify candidate architectural alternatives to fill the capability gaps. In this task, the team researched an evolutionary approach to identifying alternative architectures, began testing the approach via a Python implementation, and published a conference paper on the identified approach (Griendling and Mavris, An Architecture-based Approach to Identifying System-of-Systems Alternatives, IEEE SoSE Conference Proceedings, 2010). The method formulated for identifying alternatives is based on the fundamental idea that alternatives can be represented by changes to one or more of the architecture products. Because some of these changes can be very complicated, brainstorming alternatives can be a difficult and time-consuming activity. Additionally, while some alternatives can be represented in an IRMA or matrix of alternatives (MoA), others are more difficult to represent in this fashion due to their diversity and complexity. For example, alternatives that involve replacing the systems used at given operational nodes can easily be represented in a MoA, while alternatives that involve re-sequencing operational activities are more difficult. Since these changes are represented through changes to architecture products, it is reasonable to expect that the architecture products themselves can aid in the brainstorming and generation of alternatives. Structured and methodical manipulation of the baseline DoDAF products could allow a greater number of alternatives to be identified more quickly. For example, when looking at an OV-2, the following questions could be used to inspire the generation of new alternatives:

  • Is there a way any of these nodes could be deleted or combined while still performing the mission successfully?
  • Is there a way nodes could be broken apart or new nodes could be added to potentially enhance performance?
  • Could the information needs between nodes be rearranged?

It would then be necessary to examine the impact of each proposed change on downstream products, examining the possible alternatives contained within the propagation of changes to other products. If a similar series of questions were generated for all products containing the level of detail required for early-phase acquisition, a structured method for alternative generation could be created that would allow alternatives to be generated across the DOTMLPF spectrum. It has been the observation of the author that, for a large SoS problem, using a visual, DoDAF-based method for generating alternatives is more effective than using a text-based method. The drawback to this type of method is that documentation of the alternative space can quickly become large; it is therefore important to associate a descriptive name with each alternative so that alternatives of interest can be quickly located when needed. It is also important to note that many alternatives will span more than one category of the DOTMLPF spectrum. For example, a change to doctrine will require retraining personnel, and a materiel change, such as introducing a new system, may require changes to doctrine in order to use the new system effectively. Therefore, it may not be possible to categorize alternatives cleanly into the categories provided. An example of an alternative is shown to the right.
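The question-driven manipulation of an OV-2 described above can be sketched as systematic enumeration over a toy graph of operational nodes and needlines. All node and needline names below are hypothetical, not from the project baseline:

```python
from itertools import combinations

# Toy OV-2: operational nodes and needlines (information exchanges).
# All names are hypothetical, not from the project baseline.
nodes = ["C2", "Jammer", "Striker", "ISR"]
needlines = [("ISR", "C2"), ("C2", "Striker"), ("C2", "Jammer")]

def merge_alternatives(nodes):
    """One candidate alternative per pair of nodes that might be combined."""
    return [f"combine {a} and {b}" for a, b in combinations(nodes, 2)]

def delete_alternatives(nodes, essential):
    """One candidate alternative per non-essential node that might be removed."""
    return [f"delete {n}" for n in nodes if n not in essential]

def rewire_alternatives(needlines, nodes):
    """One candidate alternative per way a needline could be redirected."""
    return [f"redirect {src}->{dst} to {src}->{n}"
            for src, dst in needlines for n in nodes if n not in (src, dst)]

alts = (merge_alternatives(nodes)
        + delete_alternatives(nodes, essential={"C2", "Striker"})
        + rewire_alternatives(needlines, nodes))
print(len(alts))  # 6 merges + 2 deletions + 6 rewirings = 14 candidates
```

Each generated string names one candidate change; in the actual process, each would still need its downstream impacts on other products examined.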

Alternative Evaluation


In order to enable zero-order alternative evaluation, research has been done to develop a capability-focused modeling language. So far, this research has resulted in a method that distinguishes model metrics into mission-independent and mission-dependent types. Mission-independent metrics are those that can be modeled independently of the system-to-task mapping (e.g., schedule risk and procurement cost). Mission-dependent metrics are those that depend on the system-to-task mapping (e.g., probability of mission success and time to complete the mission). Additionally, a method has been formulated that allows rapid, automatic generation of novel alternative architectures. This method automatically generates feasible combinations of different systems while also taking into account possible novel and revolutionary systems that can be included in the architecture. This is done by creating a hierarchy of tasks describing how separate tasks can be linked, but not the actual task-to-task links; system-to-task mappings are described similarly. The modeling tool can then automatically generate different linking structures to instantiate distinct alternative architectures. Finally, a framework to describe different computational models of metrics (cost, performance, risk, etc.) that translate an alternative architecture into an executable architecture has been researched and implemented. The implementation of this methodology, and its subsequent execution on the cloud, has sped up quantitative alternative analysis enough that it is now feasible to explore the full architectural alternative space early in the design process. This allows an initial downselection to a subset of alternatives that are carried forward into higher-fidelity modeling. The methodology behind the capability-focused modeling framework is shown here.
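The automatic generation of alternative architectures from a system-to-task feasibility description might be sketched as follows. The tasks, systems, and feasibility table are illustrative assumptions only:

```python
from itertools import product

# Hypothetical sketch: enumerate every feasible system-to-task mapping.
# Tasks, systems, and the feasibility table are illustrative only.
tasks = ["detect", "jam", "strike"]
capable = {                       # which systems can perform each task
    "detect": ["EW aircraft", "UAV"],
    "jam":    ["EW aircraft"],
    "strike": ["Fighter", "UAV"],
}

def generate_architectures(tasks, capable):
    """Yield one candidate architecture per assignment of a capable
    system to each task."""
    for assignment in product(*(capable[t] for t in tasks)):
        yield dict(zip(tasks, assignment))

alternatives = list(generate_architectures(tasks, capable))
print(len(alternatives))  # 2 * 1 * 2 = 4 candidate architectures
```

Adding a novel system is then just another row in the feasibility table, which is what lets revolutionary concepts enter the enumeration alongside existing assets.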

In order for these architectural alternatives to be analyzed and compared in a higher-fidelity modeling environment, a method for translating architectures (in product form) into modeling and simulation environments is needed. Because of the large number of alternatives, a way to automate or semi-automate the process of inputting them into a modeling and simulation environment is also needed. Several research topics have been pursued within this area. First, DoDAF views were analyzed to determine which views contained which pieces of information. Next, different modeling and simulation techniques were researched to determine which could provide the metrics required by the metrics derivation phase of the process, and what inputs those models required. The model inputs were then matched to the DoDAF view outputs so that each model could be grouped with an associated set of DoDAF views acting as its input. This categorization scheme provides the basis for formulating a framework to create executable DoDAF products. This second level of executable architectures will be supported by higher-fidelity modeling using discrete event simulation and Markov chains. This research area is still in its early phases.
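The matching of model input needs to DoDAF view contents described above can be sketched as a simple covering selection over view contents. The view contents and required inputs here are hypothetical:

```python
# Hypothetical matching of a model's required inputs to the DoDAF views
# that contain them. View contents and input needs are illustrative.
view_contents = {
    "OV-2": {"operational nodes", "needlines"},
    "OV-5": {"activities", "activity sequence"},
    "SV-1": {"systems", "system interfaces"},
}

def views_for_model(required_inputs, view_contents):
    """Greedily select views until the model's inputs are covered;
    return the chosen views and any inputs no view provides."""
    selected, remaining = [], set(required_inputs)
    for view, contents in view_contents.items():
        if contents & remaining:
            selected.append(view)
            remaining -= contents
    return selected, remaining

views, missing = views_for_model({"activities", "systems", "needlines"},
                                 view_contents)
print(views, missing)  # ['OV-2', 'OV-5', 'SV-1'] set()
```

A non-empty `missing` set would flag model inputs that no baseline view supplies, pointing to a gap in the architecture description itself.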

Decision Support

The final area of research in the executable architecting category has involved exploring what information would be most helpful to decision makers when acquiring or evolving a SoS, and how that information should be presented. To date, two specific areas have been explored. First, research into characterizing and measuring architectural complexity for use in tradeoff analyses has been conducted. This has resulted in the creation of a complexity measure for military SoS architectures. The architecture complexity framework equation is given below.

[Figure: architecture complexity framework equation and complexity table]

Secondly, research was conducted into the use of Real Options Analysis for acquisition-level decision support. An option is defined in finance as the right, but not the obligation, to take an action (e.g., deferring, expanding, contracting, or abandoning) at a predetermined cost and within a predetermined time. A 'Real Option' means the option pertains to a physical or tangible asset, such as equipment, rather than a financial instrument. Real Options provide a framework for strategic decision making and result in an improved valuation methodology to aid an analysis of alternatives. The methodology conceptually and visually combines key SoS architectural information with the programmatic parameters that impact the likelihood of success:

  • Measured architectural complexity
  • Performance levels likely to be achieved within different time frames
  • Different programmatic risk categories such as technical, system integration, design, production, and business
  • Uncertainties in the estimates

The end result is the development of an Acquisition Option Space, or AOS, that aids decision makers in determining the value of each alternative under investigation, or whether the investment of additional project resources represents a worthwhile endeavor.
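For orientation, a generic binomial-lattice valuation of a call-like real option (the right to invest a cost K in an asset worth S) is sketched below. This is the standard Cox-Ross-Rubinstein mechanics only, not the ARC-VM formulation, and all numbers are illustrative:

```python
from math import exp, sqrt

# Generic binomial-lattice (Cox-Ross-Rubinstein) valuation of a call-like
# real option: the right, but not the obligation, to invest K in an asset
# currently worth S. NOT the ARC-VM formulation; all inputs illustrative.
def real_option_value(S, K, r, sigma, T, steps):
    dt = T / steps
    u = exp(sigma * sqrt(dt))           # up move factor
    d = 1 / u                           # down move factor
    p = (exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    # option payoff at each terminal node (j = number of up moves)
    values = [max(S * u**j * d**(steps - j) - K, 0.0)
              for j in range(steps + 1)]
    # discount expected values back through the lattice
    for _ in range(steps):
        values = [exp(-r * dt) * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

value = real_option_value(S=100.0, K=110.0, r=0.05, sigma=0.3, T=2.0, steps=200)
```

Note that the option value grows with uncertainty (sigma), which is what makes the framework attractive for valuing flexibility in acquisition programs where uncertainty is high.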



Publications

  • Domercant, Jean. ARC-VM: An Architecture Real Options Complexity-based Valuation Methodology for Military System-of-Systems Acquisition. Ph.D. Thesis, Georgia Institute of Technology, December 2011.
  • Griendling, Kelly. ARCHITECT: The Architecture-based Technology Evaluation and Capability Tradeoff Methodology. Ph.D. Thesis, Georgia Institute of Technology, December 2011.
  • Domercant, Jean; Mavris, Dimitri. Measuring the Architectural Complexity of Military Systems-of-Systems. 2011 IEEE Aerospace Conference, Big Sky, MT.
  • Griendling, Kelly; Mavris, Dimitri. Development of a DoDAF-based Executable Architecting Approach to Analyze System-of-Systems Alternatives. 2011 IEEE Aerospace Conference, Big Sky, MT.
  • Chou, Shuo-ju. A Conceptual Methodology for Assessing Acquisition Requirements Robustness against Technology Uncertainties. Ph.D. Thesis, Georgia Institute of Technology, May 2011.
  • Griendling, Kelly; Mavris, Dimitri. An Architecture-based Approach to Identifying System-of-Systems Alternatives. 5th Annual IEEE SoSE Conference, Loughborough, UK, June 22-24, 2010.
  • Bagdatli, Burak; Griendling, Kelly; Mavris, Dimitri. A Method for Examining the Impact of Interoperability on Mission Performance in a System-of-Systems. 2010 IEEE Aerospace Conference, Big Sky, MT.
  • Griendling, Kelly; Mavris, Dimitri. A Process for System of Systems Architecting. 2010 AIAA Aerospace Sciences Meeting, Orlando, FL.
  • Griendling, Kelly. Chapter 11: DoDAF and MODAF Artifacts, in Architecture and Principles of Systems Engineering, by Dr. Dimitri Mavris and Dr. Charles Dickerson, Auerbach Publications, 2010.