ABCD |
The ABCD method is based on a toolset of conversation, invention and intent. These tools combine to form the intellectual heart of design thinking, i.e., solutions are derived through creativity and community rather than cause and effect. The tool supports strategic decision making and provides traceable outcomes against a clear scope. It has slightly different objectives depending on the level at which it is used: at a strategic or organisational level it enables strategic thinking about the whole and generates strategic intent; at a project level it derives business intent and the specific items of information relevant to a project.
Accidental Architecture |
Accidental Architecture is a general term for a de facto approach to technology that develops over time as a result of not having a coherent strategy. The result is an ongoing legacy of point-to-point integration solutions, uncontrolled overlaps in system functionality, unnecessarily duplicated infrastructure, high IT costs and slow time-to-market. An accidental architecture represents a brittle, rigid IT environment that is not cohesive and cannot cost-effectively withstand new additions or changes to the environment.
ACID |
Atomicity, Consistency, Isolation, Durability (ACID) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction; for example, a transfer of funds from one bank account to another, even though it involves multiple changes such as debiting one account and crediting another, is a single transaction. The chosen initials refer to the acid test.
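A minimal sketch of an atomic funds transfer using Python's built-in sqlite3 module; the table, account names and amounts are illustrative only.

```python
# Either both updates are committed or neither takes effect: atomicity.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 'bob'")
    conn.commit()        # durably persists the whole transaction
except sqlite3.Error:
    conn.rollback()      # undoes every change made within the transaction
```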
Active Directory |
Active Directory is a Microsoft product that provides a central location for network administration and security. |
Activity |
Activity is the fourth level in the F2A process hierarchy framework and sits between the process/sub-process and task levels. An activity is the lowest level of process flow decomposition and must be confined to a single service container. Only the activity contains tasks. |
Agnostic |
Agnostic services are not aware of the context in which they are being called, nor are they aware of how the service is implemented (platform, technology, etc.). Non-agnostic services can have one or more forms of coupling or context (e.g., a process-specific functional context).
Algorithm |
An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. |
APCA |
Australian Payments Clearing Association (APCA) is the self-regulatory body established by the payments industry to improve the safety, reliability, equity, convenience and efficiency of the Australian payments system. |
API |
An Application Programming Interface (API) was traditionally used for enterprise application integration (EAI) and remote procedure calls (RPC), but has more recently been adopted to provide gateways for mobile applications. APIs have now become almost synonymous with Web Services.
API Endpoint |
An API endpoint is one end of a communication channel. When an API interacts with another system, the touchpoints of this communication are considered API endpoints. This can include a URL of a server or service. |
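A minimal sketch of a client calling an API endpoint over HTTP, assuming the third-party requests library is available; the URL and response fields are hypothetical.

```python
import requests

response = requests.get(
    "https://api.example.com/v1/accounts/12345",   # the endpoint: a URL exposed by the service
    headers={"Accept": "application/json"},
    timeout=10,
)
response.raise_for_status()   # fail fast on 4xx/5xx responses
account = response.json()     # parse the JSON payload returned by the endpoint
```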
API Gateway |
An API gateway is an API management tool that sits between a client and a collection of backend services. An API gateway acts as a reverse proxy to accept all application programming interface (API) calls, aggregate the various services required to fulfill them, and return the appropriate result. |
Application Component |
An Application Component is designed and developed to support a specific boundary layer consumer. Each operation within the application component is designed and developed to support a specific user interface or batch control step. Application component re-use is possible but not essential. |
Artificial Intelligence |
Artificial Intelligence (AI) is (apparent) intelligence exhibited by machines. The field of AI develops methods and systems that allow machines to perceive their environment and use learning and advanced search techniques to maximise their chances of achieving defined goals. |
ASIC |
Australian Securities and Investments Commission (ASIC) is Australia’s corporate, market and financial services regulator. It is an independent Commonwealth Government body which administers the Australian Securities and Investments Commission Act (ASIC Act). |
Asynchronous (Communications) |
Asynchronous communication does not require all parties involved in a communication to be present and available at the same time. For example, the receiver of an email does not have to be logged on when the sender sends the message; discussion boards allow conversations to evolve and develop over a period of time; and text messaging using mobile phones can be read and answered later.
Asynchronous (Integration) |
Asynchronous is defined as “not existing or occurring at the same time”. In the integration context it means the processing of a request occurs at an arbitrary point in time, after it has been transmitted from client to service. It is non-blocking, which means the client can proceed with other activity, and does not wait for a response. |
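A minimal sketch of asynchronous, non-blocking integration in Python, using an in-process queue as a stand-in for a message broker; the message content and worker behaviour are illustrative.

```python
import queue
import threading
import time

requests_q = queue.Queue()

def service_worker():
    # The service processes requests at its own pace, some time after submission.
    while True:
        msg = requests_q.get()
        time.sleep(0.5)                      # simulate processing latency
        print(f"processed: {msg}")
        requests_q.task_done()

threading.Thread(target=service_worker, daemon=True).start()

requests_q.put({"type": "CreateOrder", "orderId": 42})    # transmit the request
print("client continues without waiting for a response")  # non-blocking
requests_q.join()                                          # demo only: wait before exiting
```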
ATO |
The Australian Taxation Office (ATO) is the Australian regulatory body for all types of national/federal taxation.
Atomicity |
Atomicity is a binary property of transactions which are guaranteed to either complete successfully or have no effect. |
Authentication |
Authentication is the process by which one principal verifies with another principal that they are who they claim to be. In computer security, authentication is the process by which a computer program or another user attempts to confirm that the computer, computer program, or user from whom the second party has received some communication is, or is not, the claimed party. Typically, the verifying party is presented with a set of credentials in the form of a PIN/password, shared secret or certificate. |
Authorisation |
Authorisation is the act of determining access to a resource. Once an entity is authenticated, its access to a resource must still be authorised by the control information associated with that resource. Authorisation protects computer resources by only allowing those resources to be used by accessors that have been granted authority to do so. Resources include individual files or data items, computer programs, computer devices and functionality provided by computer applications.
Autonomy |
Autonomy is a scalar property of services that defines how much control a service has over its environment and resources to perform its task within its defined service level parameters. Microservices, for instance, are typically architected to have a high level of autonomy. A notification service is an example of a service that might have a low requirement for autonomy.
Balanced Scorecard |
Balanced Scorecard is a strategy performance management tool and semi-standard structured report, supported by design methods and automation tools that are used to track the execution of activities and monitor the ensuing consequences. It is regarded as a performance metric used in strategic management to identify and improve various internal functions and their resulting external outcomes. The balanced scorecard attempts to measure and provide feedback to organisations in order to assist in the implementation of strategies and objectives. This management technique isolates four separate areas that need to be analysed:
• learning and growth
• business processes
• customers, and
• finance.
Data collection is imperative to providing quantitative results, which are interpreted by managers and executives and used to make better long-term decisions.
BAM |
Business Activity Monitoring (BAM) refers to the aggregation, analysis and presentation of real-time (or near real-time) information about business processes. BAM summarises process information on dashboards that contain key performance indicators used to provide assurance and visibility of activity and performance. A drill-down capability allows an individual process instance to be explored in detail. For business processes that are tracked, BAM provides a full end-to-end view of all activities conducted during the execution of that process and reports the achievement of service levels and any failures. BAM also assists the process modelling function through the visualisation of processes.
BAU |
Business as Usual (BAU) is a standard “day-to-day” requirement to support customers and deliver regulatory and other incremental (rather than transformational) change. |
BIA |
Business Impact Analysis (BIA) predicts the consequences of disruption to a business function/process and gathers information needed to develop recovery strategies. Potential loss scenarios should be identified during a risk assessment. Identifying and evaluating the impact of disruptions provides the basis for investment in recovery strategies and prevention and mitigation strategies and is an essential component of an organisation's business continuity planning; it includes an exploratory component to reveal vulnerabilities; and a planning component to develop strategies for minimising risk. |
Big Data |
Big Data refers to the storage, processing, and analysis of datasets that are too large or complex to be dealt with by traditional data-processing systems. The use of Big Data can lead to improved statistical analysis (due to the large sample size), visualisation, and predictive analysis. |
BIR |
The Bureau of Internal Revenue (BIR) is the Philippines' regulatory body for taxation, equivalent to the US Internal Revenue Service (IRS) or the Australian Taxation Office (ATO).
Bit Stream |
A bit stream is a contiguous sequence of bits, representing a stream of data, transmitted continuously over a communications path serially, i.e., one bit at a time.
Black Box Testing |
Black box testing is a method of software testing in which the software is tested without knowledge of the internal structure of the code or program.
Boundary Component |
Boundary Components represent reusable ‘building blocks’ from which user interfaces and boundary services are constructed. They are designed for a specific interface application and are specific to a particular consumer. Boundary components are often bundled into frameworks (e.g., React, Angular) or Software Development Kits (SDKs).
Boundary Layer |
A Boundary Layer is a concept within the Construction and Integration Architecture domains which provides the input channel for solutions. It performs minimal processing of the data it is capturing and then invokes functionality from the Service Implementation Layer that is made available via the Integration Layer. |
Boundary Service |
A Boundary Service is designed and developed to support a specific boundary layer consumer. Each operation within the boundary service is designed and developed to support a specific user interface or batch control step. Boundary service re-use is possible but not essential. |
BPAY |
BPAY is a service that allows the payment of bills using credit cards or electronic transfer from bank accounts e.g., via Internet or phone banking. |
BPD |
Business Process Discovery (BPD) is related to process mining and is a set of techniques that automatically construct a representation of an organisation’s current business processes and its major process variations. It provides a collection of tools and techniques to define, map and analyse existing business processes. This analysis provides a baseline for process improvements and identifies key problem areas to be addressed. Process discovery tools and techniques can be manual or automated, including the use of business intelligence (BI) and business analytics. It does this by providing an explicit process view of current operations, and analytics of that process model to help identify and action business process inefficiencies or anomalies.
BPEL |
Business Process Execution Language (BPEL) enables task sharing through the standardisation of business processes so that they can interconnect and share data. It is a standard executable orchestration language for specifying actions within business processes using Web Services, which involves message exchanges with other systems. Processes in BPEL export and import information exclusively through Web Service interfaces.
BPM |
Business Process Management (BPM) is a management approach focused on aligning all aspects of an organisation with the wants and needs of clients. It is a holistic management approach that promotes business effectiveness and efficiency while striving for innovation, flexibility, and integration with technology. Business process management attempts to improve processes continuously. It could therefore be described as a "process optimisation process." BPM enables organisations to be more efficient, more effective and more capable of change than a functionally focused, traditional hierarchical management approach. |
BPMN |
Business Process Model and Notation (BPMN), a standard maintained by the Object Management Group (OMG), is a graphical representation for specifying business processes in a business process model. It is a standard method of illustrating business processes in the form of a diagram similar to a flowchart. A diagram in BPMN is assembled from a small set of core elements, making it easy for technical and non-technical observers to understand the processes involved. Elements are categorised into three major groups called flow objects, connecting objects and swim lanes. Flow objects, denoted by geometric figures such as circles, rectangles and diamonds, indicate specific events and activities. Flow objects are linked with connecting objects, which appear as solid, dashed or dotted lines that may include arrows to indicate process direction. Swim lanes organise diverse flow objects into categories having similar functionality.
BPMS |
Business Process Management Suite (BPMS) is a technology tool for automating the key steps in business process management, such as design, modelling, execution, monitoring, and optimisation.
Business Capability |
A business capability is a grouping of related services/functionality, or sub capabilities, that share similar people and technology needs. Business capabilities are enterprise level concepts that focus on delivering outcomes. The generic expression of business capabilities allows the BCM to remain relevant during organisation, process and technology changes. Individual business capabilities reflect areas that:
• Are largely self-contained;
• Have specific interfaces;
• Are large enough to track to a set of outcomes;
• Encompass requirements for people, process and technologies that are highly related;
• Encapsulate the data they own;
• Own all processes entirely satisfied by the services they own;
• Are a mixture of people, process and technology.
Business Capability Model |
A Business Capability Model (BCM) describes “what” the business does and includes all business capabilities an organisation requires to operate effectively. The model is relatively stable and only changes if the organisation enters (or leaves) a line of business. Due to the stable nature of the BCM, it is a more suitable model of the business on which to base an organisation’s IT architecture than either a process model or organisation structure, which both undergo constant change. |
Business Continuity Management |
Business Continuity Management is a management process that identifies risks, threats and vulnerabilities that could impact an organisation’s continued operations and provides a framework for building organisational resilience and an effective response. Some industries, especially those which have public interest such as financial institutions, are required by regulation to provide a certain level of protection to their data and services. The objective of Business Continuity Management is to make the organisation more resilient to potential threats and allow it to resume or continue critical services and associated assets under adverse or abnormal conditions. This is accomplished by the introduction of appropriate resilience strategies to reduce the likelihood and impact of a threat and the development of plans to respond and recover from threats that cannot be controlled or mitigated.
Business Entity |
A business entity is a self-contained, functional component of a business, often a legal construct such as a company, partnership or association. |
Business Event |
A Business Event is an event that has been produced by a component in the Integration Platform. It expresses events that are meaningful to the business. |
Business Intelligence (BI) |
Business Intelligence (BI) refers to the technologies, applications, and practices used to collect, analyse, and present data in a way that enables informed decision-making and strategic planning within an organisation. BI involves gathering data from various sources, transforming it into meaningful and actionable insights, and presenting it in the form of reports, dashboards, visualisations, or interactive tools. It encompasses data integration, data modelling, data analysis, and data visualisation techniques. |
Business Layer |
The Business Layer is a concept within the Process and Integration Architecture domains which provides Business Processes and Business Service logic. |
Business Logic |
Business Logic is custom rules or algorithms that handle the exchange of information between a database and a user interface. It consists of business rules, which are policies that govern various aspects of a business, and workflows, which are sequences of steps that specify in detail the flow of information or data.
Business Process |
A Business Process is a collection of related, structured activities or tasks that produce a specific service or product (serve a particular goal) for a particular customer or customers. It can often be visualised with a flowchart as a sequence of activities with interleaving decision points. A business process can be decomposed into sub-processes, which have their own attributes, but also contribute to achieving the goal of the super-process. The analysis of business processes typically includes the mapping of processes and sub-processes down to task level. The generic term business process can be used for any level within the process hierarchy framework with the exception of the lowest level, a task. In other words, value chains, value streams, processes and activities are all business processes, just at different levels of decomposition. |
Business Process Choreography |
A business process choreography is a type of process but differs in purpose and behaviour from a standard orchestrated process (which is more familiar to most process modellers) that defines the flow of activities an organisation executes. In contrast, choreographies formalise the way business participants in different organisations coordinate their interactions. The focus is not on orchestration of the work performed by these participants, but on the exchange of information (messages) between them. Process choreography does not have a central control mechanism and, therefore, has no mechanism for maintaining any central process data and no overall controller (Business Process Execution Engine). When writing choreographies during the design of a “system” that spans organisational boundaries, it is necessary to establish a clear agreement on how the various parties will interact with each other. The choreography serves to establish an agreement between the multiple stakeholders and allows them to understand how their systems will interact. Unlike process orchestrations, process choreographies do not get implemented in any tool as they are a design time concept only. The outcome of designing process choreographies is typically a multi-party agreement (contract) and each party then implements process orchestrations (or sometimes just services) to realise its part of the choreography. In summary, for process choreography:
• The overall process behaviour emerges from the working of its parts (bottom up). No global perspective is required;
• Complex inter-organisational processes are decomposed into autonomous parts where each controls its own agenda (and in fact may be implemented as a process orchestration);
• Easily maps to event and agent-based systems;
• Is usually more difficult to start, but often easier to scale to complex processes;
• Is supported by the WS-CDL standard;
• Graphical representations can be derived from the overall interaction, i.e., form follows function;
• Is not supported by any execution engine such as a BPMS.
Interactions between banks and intermediary origination platforms (e.g., NextGen) are examples of process choreographies. For more detail on the difference between process orchestration and process choreography refer to the definition of Business Process Orchestration.
Business Process Design |
Business process design describes “how” the business operates and is used to develop efficient business process practices. It consists of business process principles, process categories and process execution hierarchies that help realise the full potential of its business capabilities. It is relatively dynamic and can change frequently, especially for organisations that undergo continuous process improvement or frequent business process re-engineering. |
Business Process Model |
A business process model is a model of workflows that provides a graphic description of an end-to-end business process. Business process modelling is an engineering exercise to represent processes, so that current processes may be analysed and improved. Process models are typically produced by business analysts or business architects seeking to improve process efficiency and quality. It includes detailed documentation of the business rules, activity processing times, and other descriptive process information. The process diagram is the core concept within process documentation. |
Business Process Orchestration |
A business process orchestration is the most common approach to automated process execution, in which the sequence of steps within the business process, including conditions and exceptions, is defined and a BPMS is then used to execute the sequence at runtime. All aspects of the process are controlled by the BPMS – even where steps are outsourced, the organisation retains control of the overall process execution. Today’s standards for business process orchestration include BPMN (Business Process Model and Notation) for defining the visual representation of the sequence, and BPEL (Business Process Execution Language) as the ‘code’ that executes the sequence. In summary, process orchestration:
• Defines a single master that controls all aspects of a process (top-down approach)
• Supports a graphical view of the sequence
• Is usually simpler to start with, but often harder to scale to more complex inter-organisational interactions
• Is driven by the graphical sequence model, i.e., function follows form
• Is supported by the BPMN and BPEL standards
• Is supported by the majority of BPMS products
Business Rule |
A business rule defines or constrains some aspect of business and always resolves to be either true or false. Business rules are intended to assert business structure, or control and influence the behaviour of the business. Business rules describe the operations, definitions and constraints that apply to an organisation. Business rules can apply to people, processes, corporate behaviour and computing systems in an organisation, and are put in place to help the organisation achieve its goals. There are three high-level classes of business rules:
• Structural Rules – Core rules that define what is needed for the organisation to operate and often drive its information architecture needs.
• Decision Logic Rules – Rules at the core of what is typically referred to as the business logic. When a business decision needs to be made (e.g., whether a person qualifies for a mortgage) the business rules are the individual statements of business logic that determine the result of the decision (refer to Decision Logic below for more information).
• Process Flow Rules – Rules that purely direct the movement through a process flow or workflow.
Business Service |
A business service is a component service designed to implement a specific business function. The functionality is typically meaningful to the business. |
Canonical Data Format |
The canonical data format is the format used to describe the schema for the Canonical Data Model. Possible formats include XML Schema Definition (XSD) and JavaScript Object Notation (JSON).
Canonical Data Model |
The Canonical Data Model is a design pattern based on the Common Data Model, used to create messages on the Integration Platform, and to transform between different data formats. |
CDC |
Change Data Capture (CDC) is a method of replicating data from one database to another, by capturing the low-level changes made against the source database and applying them to another database. |
CDO (Data) |
See Chief Data Officer. |
CDO (Digital) |
Chief Digital Officer (CDO, CDiO). See also Chief Information Officer. |
CEP |
Complex Event Processing (CEP) is a method of tracking and analysing (processing) streams of information (data) about things that happen (events) and deriving a conclusion from them. CEP combines data from multiple sources to infer events or patterns that identify meaningful events (such as opportunities or threats) which allows for a heightened response. |
Chief Data Officer (CDO) |
The Chief Data Officer (CDO, CDaO) is responsible for the effective and efficient use and management of an enterprise's data holdings. A CDO is not a technological role and is not focused on the technical side of data (technology is the remit of the Chief Information Officer). The CDO aims to understand the strategic value of data, the safeguards required to protect it, and what's required to maximise its use. |
CIA |
Confidentiality, Integrity, Availability (CIA) is a model designed to guide policies for information security within an organisation. In this context, confidentiality is a set of rules that limits access to information; integrity is the assurance that the information is trustworthy and accurate; and availability is a guarantee of ready access to the information by authorised users. The model is sometimes known as the CIA triad.
CIO |
The Chief Information Officer (CIO) is responsible for an organisation's Information Technology. An organisation's CIO must ensure that the use of information technology in the organisation meets business goals and supports business functions. |
Cloud Computing |
Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities (examples include compute, network, storage, platform) are delivered as a service using Internet technologies. |
CMDB |
A Configuration Management Database (CMDB), also known as a Configuration Management System (CMS), is a fundamental component of the ITIL Framework and is a unified or federated repository of information related to all the components of an information system. It helps an organisation to understand the relationships between these components and track their configuration. The CMDB records configuration items (CI) and details about the relationships between CIs. |
CMMN |
Case Management Model and Notation (CMMN) is a graphical notation used for capturing work based on cases, which require various activities to be performed in an unpredictable order in response to emergent situations. Using an event-driven approach and the concept of a case file, CMMN extends what can be modelled with BPMN to include less structured work driven by knowledge workers.
CMS (Information) |
A Content Management System (CMS) is software used to manage the creation and modification of digital content. A CMS is typically used for enterprise content management (ECM) and web content management (WCM). ECM typically supports multiple users in a collaborative environment by integrating document management, digital asset management, and record retention. Alternatively, WCM provides collaborative authoring for websites and may include text and embed graphics, photos, video, audio, maps, and program code that display content and interact with the user. ECM typically includes a WCM function. (From https://en.wikipedia.org/wiki/Content_management_system) |
CMS (Operations) |
A configuration management system (CMS or CMDB) is a fundamental component of the ITIL Framework and is a unified or federated repository of information related to all the components of an information system. It helps an organisation to understand the relationships between these components and track their configuration. The CMS records configuration items (CI) and details about the relationships between CIs. |
COBIT |
Control Objectives for Information and Related Technology (COBIT) is a framework for developing, implementing, monitoring and improving IT governance and management practices.
Combined Team |
A Combined Team is a team of data specialists and data custodians. It provides an excellent basis to build co-operation and trust between the business units and IT and across business silos. Members of a combined team:
· Work in collaboration to understand current issues, potential problems and opportunities;
· Build and share knowledge and learning;
· Balance individual need with the common good;
· Champion Enterprise Architecture and data management;
· Act as directed by the Data Governance council to ensure resources are directed for maximum business benefit.
Common Data Model |
A Common Data Model (CDM) is an enterprise-wide data model that exists at three levels: conceptual, logical, and physical. At the conceptual level, it identifies the main entities of interest; at the logical level the entity list is expanded, and attributes and relationships added; at the physical level it is tied to the implementation platform, is fully normalised, and contains data types, indexes etc. The Common Data Model is the superset of all Subject Areas. |
Common Warehouse Meta Model |
Common Warehouse Meta Model defines a specification for modelling metadata for relational, non-relational, multi-dimensional, and most other objects found in a data warehousing environment. |
Component Model |
A component model describes the dependencies and interactions between components and represents the implementation of the service within the Service Model. |
Composite Service |
A composite service is one that is comprised of one or more lower-order services. |
Conceptual Data Model |
A conceptual data model is a high-level graphical representation of the key data items used in the business and the interrelationships between them. |
Configuration Item (CI) |
A Configuration Item (CI) is an instance of an entity that has configurable attributes: for example, a computer, a process, or an employee. Configuration items that are IT assets, or a combination of IT assets, may depend on and/or have relationships with other IT processes. A CI may have attributes that are hierarchical and that, together with the relationships between CIs, will be captured within a central database, such as a CMDB. |
Connector Service |
A connector service is a service that is responsible for connecting services between layers. For example, a system service is often responsible for connecting an Enterprise Service with the underlying solution. Its primary function is to translate between business language (the CDM) and application language. |
Construction Architecture |
Construction Architecture forms part of the base documentation of the Fragile to Agile IAF. It describes the Strategic Construction Architecture and details the core concepts and principles when building and constructing technology solutions. |
Consumer Container |
A consumer container groups together services that have been developed for a specific service consumer. Services in this container are also typically bound to a single Service Container. |
Cookie |
A cookie is a small piece of data placed on a user’s hard drive by some websites; it acts as a form of authentication to identify the user when they next visit the same website.
COTS Applications |
Commercial Off The Shelf Applications (COTS) are solutions and applications which are manufactured commercially and may be specifically configured for organisations. They vary in size and complexity from small niche applications targeting specific capabilities to large-scale ERP (Enterprise Resource Planning) systems.
Coupling |
Coupling refers to a connection or relationship between two things. The principle of Service Loose Coupling promotes the independent design and evolution of a service's logic and implementation while still guaranteeing baseline interoperability with consumers that have come to rely on the service's capabilities.
• Services or components that are decoupled can be evolved or enhanced without impacting their consumers.
• An Event Driven Architecture enables the most decoupled participants.
• A point-to-point interface can be considered the most coupled.
CQRS |
Command Query Responsibility Segregation (CQRS) is an integration pattern whereby reading data and writing data occur within different models, each optimised for its activity.
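A minimal sketch of the pattern in Python: a command mutates a write-optimised model, while queries are served from a separately maintained, read-optimised view. All class, field and order names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class OrderReadModel:
    # Denormalised view, optimised for fast queries (e.g. a running total).
    total_revenue: float = 0.0

    def project(self, order_id: str, amount: float) -> None:
        self.total_revenue += amount

    def get_total_revenue(self) -> float:          # query side
        return self.total_revenue

@dataclass
class OrderWriteModel:
    # Store optimised for consistency checks on writes.
    orders: dict = field(default_factory=dict)

    def place_order(self, order_id: str, amount: float) -> None:   # command side
        if order_id in self.orders:
            raise ValueError("duplicate order")
        self.orders[order_id] = amount
        read_model.project(order_id, amount)        # propagate change to the read side

read_model = OrderReadModel()
write_model = OrderWriteModel()
write_model.place_order("A-1", 99.50)
print(read_model.get_total_revenue())
```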
CRM |
Customer Relationship Management (CRM) is a category of enterprise software that includes a broad set of applications to help organisations manage customer data and interaction; access business information; automate sales; provide marketing and customer support and manage employee, vendor, and partner relationships. |
Cross-Channel Strategy |
A cross-channel strategy uses one marketing channel (such as direct mail or internet) to support or promote another channel (such as retailing). Cross-channel is often designed to be experienced fragmentarily, i.e., no single medium provides the full package.
CRUD |
Create, Read, Update, Delete (CRUD) are the four basic operations performed on data in a database or information system:
• Create: inserting or adding new data records.
• Read: retrieving or accessing existing data in the database.
• Update: modifying or updating existing records in the database.
• Delete: removing data records from the database, making them permanently inaccessible.
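A minimal sketch of the four operations using Python's built-in sqlite3 module; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("INSERT INTO customers (name) VALUES (?)", ("Ada",))           # Create
rows = conn.execute("SELECT id, name FROM customers").fetchall()            # Read
conn.execute("UPDATE customers SET name = ? WHERE id = ?", ("Ada L.", 1))   # Update
conn.execute("DELETE FROM customers WHERE id = ?", (1,))                    # Delete
conn.commit()
```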
Cryptography |
Cryptography is a method of storing and transmitting data in a specific form so that only those for whom it is intended can read and process it. It is most often associated with scrambling plaintext (ordinary text, sometimes referred to as clear text) into cipher text (a process called encryption), then back again (known as decryption). |
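A minimal sketch of symmetric encryption and decryption, assuming the third-party cryptography package is installed; the plaintext is illustrative.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # shared secret; must be protected
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"account=12345; balance=100.00")   # plaintext -> cipher text
plaintext = cipher.decrypt(ciphertext)                           # cipher text -> plaintext
assert plaintext == b"account=12345; balance=100.00"
```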
CTI |
Computer Telephony Integration (CTI) is a common name for any technology that allows interactions on a telephone and a computer to be integrated or coordinated. The term is predominantly used to describe desktop-based interaction that helps users be more efficient, but it may also refer to server-based functionality such as automatic call routing.
Customer Value Journey |
A customer value journey is the highest level within the Fragile to Agile process execution hierarchy. It covers all activities that are needed to deliver a specific outcome that a customer needs, including value streams within a journey that the organisation does not participate in. It must contain at least one value stream that the organisation provides to the value journey, but in many cases, it will only be one. An example of a Customer Value Journey is “Purchase a Home” and a bank might participate in various value streams within that value journey including, but not limited to, “Provide Financial Advice” and “Originate a Mortgage”. |
Dashboard |
A dashboard, in an information system, is a form of graphical user interface that provides an at-a-glance, single-page view of data for a subject or process(es). Dashboards are often provided via a Web browser. |
Data |
Data is a collection of discrete values (datum). These values may be facts, quantities, statistics, measurements, text, or other basic, atomic units. They are raw, uninterpreted, and without meaning. See also information. |
Data Affinity Analysis |
Data affinity analysis, sometimes referred to as data clustering analysis, considers what information is required to deliver a service/perform a function in order to determine the appropriate boundary of capabilities on a Business Capability Model. It achieves this by overlaying the Common Data Model, ensuring that data with a strong relationship (affinity) is contained within the same service container.
Data Analyst |
Data Analysts are responsible for data collection and interpretation. They know where the data is, where it fits into the BCM and CDM, its format, accessibility and quality. They convert data into tangible insights using visualisation and storytelling skills.
Data Analytics |
Data analytics refers to the process of examining and analysing large volumes of data to uncover valuable insights, patterns, and trends. It involves applying various statistical and computational techniques to interpret and extract meaning from data.
Data Architect |
A Data Architect is responsible for designing, creating, deploying, and managing an organisation’s data architecture. They will define the structure of models, policies, rules, and standards that govern the data that is collected and how it is stored, arranged, integrated, and put to use in data processing systems and organisations. |
Data Architecture |
Data Architecture is composed of models, policies, rules, and standards that govern how data is collected, stored, arranged, integrated, and used within an organisation's data processing systems. |
Data Cardinality |
Data cardinality refers to the uniqueness and variability of values within a data set or a column of a database table. It indicates the number of distinct values present in a particular data field. |
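A minimal sketch of measuring cardinality in Python by counting the distinct values in a column; the dataset is illustrative.

```python
rows = [
    {"customer_id": 1, "state": "SA"},
    {"customer_id": 2, "state": "VIC"},
    {"customer_id": 3, "state": "SA"},
]

state_cardinality = len({row["state"] for row in rows})        # 2 distinct values: low cardinality
id_cardinality = len({row["customer_id"] for row in rows})     # 3 distinct values: high cardinality
print(state_cardinality, id_cardinality)
```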
Data Catalog |
A Data Catalog, or Data Catalogue, creates and maintains an inventory of data assets through the discovery, description and organisation of distributed datasets. The data catalog provides context to enable data custodians, data analysts, business analysts, data engineers, data scientists and other line of business (LOB) data consumers to find and understand relevant datasets for the purpose of extracting business value. (source: Gartner, from https://www.gartner.com/en/documents/3837968 (paywall)) |
Data Cleansing |
Data cleansing (data cleaning, data scrubbing) is the process of identifying and correcting or removing errors, inconsistencies, inaccuracies, and duplications in a dataset. Data cleansing involves various techniques and tools to ensure data quality and integrity. It includes tasks such as removing irrelevant or redundant data, correcting misspellings or formatting errors, standardising data formats, resolving inconsistencies, and validating data against predefined rules or constraints. |
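A minimal sketch of common cleansing steps, assuming the pandas library is available; the column names and rules are illustrative.

```python
import pandas as pd

df = pd.DataFrame({
    "name":  [" Ada ", "Bob", "Bob", None],
    "state": ["sa", "VIC", "VIC", "nsw"],
})

df["name"] = df["name"].str.strip()     # correct formatting errors (stray whitespace)
df["state"] = df["state"].str.upper()   # standardise data formats
df = df.drop_duplicates()               # remove duplicated records
df = df.dropna(subset=["name"])         # drop records failing a 'name is mandatory' rule
```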
Data Consistency |
Data consistency refers to whether the same data kept at different places do or do not match. Multiple copies of the same information introduce redundancy, and with redundancy comes reliability.
• Eventual consistency ensures that each copy of the data becomes consistent at a later point in time. The time taken by the copies to become consistent may or may not be defined. Eventual consistency offers low latency at the risk of returning stale data.
• Strong consistency ensures that all copies of the data are up to date as soon as one copy (typically the master) is updated. Strong consistency offers up-to-date data but at the cost of high latency. When copies of the data are distributed this also increases complexity; N-phase commit strategies are typically employed to achieve this.
Data Custodian |
Data Custodians serve as overall coordinators for the delivery of enterprise data efforts. They define data governance policies and advise data owners and custodians on the implementation of those policies. They work in collaboration with IT Specialists to implement data strategy and build shared understanding of the business. Data Custodians have a special role in the capture and storage of information into a data lake. Data custodians are the first point of reference for data in the enterprise and serve as the entry point to access data. They must ensure the proper documentation of data and facilitate their availability to their users, such as data scientists or project managers. Their communication skills enable them to identify the data customers, as well as to collect associated information in order to centralise them and perpetuate this knowledge within the enterprise. Data Custodians provide metadata: a structured set of information describing datasets. They transform abstract data into concrete assets for the business. Data Custodians are responsible for managing the quality of the data in their remit.
Data Denormalisation |
Data denormalisation is a database optimisation technique that involves intentionally introducing redundancy into a database design to improve query performance and simplify data retrieval. It involves combining or duplicating data from multiple related tables into a single table, reducing the need for complex joins and improving query response times. Denormalisation can also be applied to data models in an integration architecture to make processing of incoming messages less complicated. |
Data Dictionary |
A data dictionary is a central repository or documentation that provides detailed descriptions and definitions of data elements, attributes, and entities within a database or system. It serves as a reference guide, documenting the structure, meaning, and usage of data elements, including their data types, formats, constraints, and relationships. The data dictionary helps users, developers, and administrators understand the data within a system, ensuring consistency, accuracy, and effective data management. It provides a common understanding of data terminology, facilitating data integration, data governance, and data analysis efforts. |
Data Engineer |
The Data Engineer role includes:
· Building and testing scalable Big Data ecosystems for the business so that data scientists can run their algorithms on data systems that are stable and highly optimised;
· Updating existing systems with newer or upgraded versions of the current technologies to improve the efficiency of the databases;
· Determining how the components fit together;
· Designing, developing, and implementing data pipelines;
· Responsibility for ETL and ELT;
· Responsibility for data lifecycle management.
Data Entity |
A data entity is a distinct unit of information that represents a real-world object, concept, or event within a data model. It serves as a fundamental building block of a database and encapsulates related attributes or properties. Data entities provide a structured and organised representation of data, enabling efficient data management, analysis, and retrieval.
Data Fabric |
Data fabric refers to an architecture or framework that enables seamless integration, management, and access to distributed data across different sources and formats. It provides a unified and consistent view of data, regardless of its location or structure. |
Data Filtration |
Data Filtration is the set of tools and processes to clean source data ready for loading into a Data Lake or Operational Data Store. |
Data Flexibility |
Data flexibility refers to the ability of an organisation's data infrastructure and systems to adapt and accommodate changing business needs, evolving technologies, and emerging data sources. It encompasses the agility and scalability of data architecture, storage, and processing capabilities to effectively handle varying data types, volumes, and formats. |
Data Flow Diagram |
A data flow diagram (DFD) is a graphical representation that illustrates the flow of data within a system or process. It depicts the movement of data from its source to its destination, showing how data is input, processed, and outputted. DFDs use symbols to represent entities (sources, processes, and destinations) and data flows (arrows representing the movement of data). They provide a visual representation of data movement, highlighting the interactions between different components and helping in understanding the system's data flow and processing logic. DFDs are commonly used in system analysis and design to model and document data flow within an organisation. |
Data Governance |
Data Governance is a discipline that embodies a convergence of data quality, data management, data policies, business process management, and risk management surrounding the handling of data in an organisation.
Data Harmonisation |
Data harmonisation is the process of standardising and aligning data from multiple sources to ensure consistency, compatibility, and interoperability. It involves transforming and mapping data elements from different formats, structures, and systems into a unified and consistent format. |
Data Indexing |
A Data Index is a facility that allows data to be found quickly and easily no matter where it is located in an enterprise. Data Indexes are created by scanning repositories, databases, and file stores for datasets and documents. |
Data Ingest |
Data ingest refers to the process of capturing and importing data from various sources into a storage or processing system, such as a database, data lake, or analytics platform. It involves collecting data in its raw or semi-structured form from different sources, such as databases, files, APIs, sensors, or streaming platforms, and transferring it into a target system for further processing, analysis, or storage. Data ingest typically involves data validation, normalisation, and transformation to ensure data quality and compatibility with the target system. The goal of data ingest is to efficiently and reliably bring diverse data into a unified environment for further utilisation and analysis. |
Data Integrity |
Data integrity refers to the accuracy, consistency, and reliability of data throughout its lifecycle within an organisation. It ensures that data remains unaltered and retains its intended meaning and quality from the point of creation or capture to its storage, processing, and retrieval. |
Data Lake |
A Data Lake is usually a single store of data including raw copies of source system data, sensor data, social data etc., and transformed data used for tasks such as reporting, visualisation, advanced analytics, and machine learning. Data Lakes differ from data warehouses in their agility and flexibility: while data warehouses manage processed data, data lakes can store and analyse data that is entirely raw (unstructured). Because data lakes contain all of an organisation’s data in one location, they avoid the problem of data silos. See also Data Lakehouse. |
Data Lakehouse |
A Data Lakehouse combines the flexibility of a Data Lake with the efficiency of a Data Warehouse. |
Data Lifecycle |
The data lifecycle describes how data is created, ingested, stored, accessed, analysed, synthesised, and finally disposed of.
Data Locality |
Data Locality is a software design pattern that moves computation to the data. With large datasets, it is often too expensive to move the data to an environment where it can be processed. Rather, processing is performed on the platform where the data resides, such as a database. For Big Data applications, this can be complicated since the data may be distributed across many physical hosts. |
Data Mart |
A data mart is the access layer of the data warehouse environment. It is used to present data to the data consumers. A data mart is a subset of the data warehouse that is usually oriented to a specific business line or team. |
Data Mesh |
Data Mesh is a decentralised approach to managing and scaling data within an organisation. It emphasises the concept of domain-oriented decentralised teams responsible for their own data products and services. In a Data Mesh architecture, data is treated as a product, and individual domain teams take ownership of specific data domains. These teams have the autonomy to manage, curate, and publish their own data products, leveraging modern data infrastructure and practices.
Data Metric |
A data metric is a criterion for assessing data quality. It is normally applied against a specific attribute of a data entity. In Data Quality Management the Data Metric is used to establish an agreed target value, then subsequent data produced is tested against the target. Typical metrics cover:
· Accuracy
· Completeness
· Consistency
· Timeliness
· Security
Data Model |
A data model defines data entities (things of interest to the business), their attributes (properties that are important to know), and the relationships between them. Data models capture what the business deals with. |
Data Normalisation |
Data normalisation is a process in database design that organises and structures data to minimise redundancy and improve data integrity. It involves breaking down data into smaller, logically related tables and applying normalisation rules to ensure each table contains only unique and relevant information. By eliminating data duplication and inconsistencies, data normalisation promotes efficient storage, reduces update anomalies, and enables flexible querying and analysis of data. |
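A minimal sketch in Python (via the built-in sqlite3 module) of splitting a redundant, denormalised table into two related tables; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Denormalised: the customer's name is repeated on every order row.
conn.execute("CREATE TABLE orders_flat (order_id INTEGER, customer_name TEXT, amount REAL)")

# Normalised: customer details are stored once and referenced by key.
conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    amount      REAL)""")
```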
Data Owner |
Although the enterprise “owns” the data, accountabilities for working with the data are assigned to roles within the enterprise. Data owners are responsible for developing the standards for the storage, retention, and disposal of corporate information. The Business Capability Model and its alignment with the Common Data Model assist in identifying the most appropriate owners of data. Normally the owner of a business capability assumes ownership of the data within it. The data owner also appoints appropriate data custodians as required to address the organisation’s needs.
Data Packet |
A data packet is a small chunk of data created when a larger piece of data is broken up for transmission; transferring data as packets is how most data travels over the Internet.
Data Pipeline |
A data pipeline is a framework or process that enables the automated and orchestrated flow of data from various sources to a destination, typically a data warehouse, data lake, or analytics platform. It involves a series of steps and transformations that extract, transform, and load (ETL) data, ensuring its quality, integrity, and accessibility for analysis and decision-making. Data pipelines facilitate data integration, data cleansing, data enrichment, and data transformation activities, providing a streamlined and efficient way to ingest, process, and deliver data for downstream consumption. |
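A minimal sketch of an extract-transform-load (ETL) pipeline in Python; the source records, transformation rules and target schema are illustrative placeholders.

```python
import sqlite3

def extract() -> list[dict]:
    # Stand-in for reading raw records from a source system (file, API, database, ...).
    return [{"id": "1", "name": " ada lovelace "}, {"id": "", "name": "bad row"}]

def transform(records: list[dict]) -> list[tuple]:
    # Apply quality rules and reshape the data for the target schema.
    return [(r["id"], r["name"].strip().title()) for r in records if r["id"]]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT)")
load(transform(extract()), conn)
```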
Data Professional |
Data Professional is a collective term for people who are involved with data. This includes data architects, data scientists, data engineers, data analysts, etc. |
Data Provenance |
Data provenance is a chain of evidence describing the origin, movement, and changes that a piece of data has experienced over time. Completeness of the provenance allows a data consumer to assess the accuracy and validity of the data, and therefore the level of trust they can place in it. Provenance is recorded as contextual metadata about the data. |
Data Quality Management |
Data Quality Management (DQM) is the proactive management of data collected and stored; quality refers to the data being ‘fit for purpose’ for the job it is required to do. DQM utilises data metrics to set agreed benchmarks, then test actual performance. |
Data Retention |
Data retention refers to the process and policies established by an organisation to determine the appropriate duration for retaining and storing data. It involves defining guidelines for how long different types of data should be retained, based on legal, regulatory, contractual, privacy, and business considerations. |
Data Scalability |
Data scalability refers to the ability of an organisation's data systems and infrastructure to efficiently handle growing data volumes, increased workloads, and expanding user demands without sacrificing performance or functionality. |
Data Scientist |
A data scientist combines data modelling, economics, statistics, analytics, and mathematical skills coupled with business acumen. Data scientists use data to solve problems for the business. They use statistical techniques, extrapolation, and other methods to find implicit data insights. Data scientists help the organisation to discover answers to primary questions that assist businesses to make sound decisions. |
Data Security |
Data security is the protection of an enterprise’s data from unauthorised access. |
Data Silo |
A data silo refers to a situation where data is stored and managed in isolated or separate systems or departments within an organisation, resulting in limited data sharing and collaboration. In this context, data becomes compartmentalised and inaccessible to other parts of the organisation, hindering data integration, consistency, and the ability to derive valuable insights. |
Data Staging Area |
A data staging area is a space within the Data Lake where loaded data is held prior to transformation into the relevant pool/pond. |
Data Steward |
A data steward is a person assigned the duty of managing the quality of specific data and/or information items. Data Stewards are subject matter experts in their respective data domains and consult with and support business staff in their day-to-day data management responsibilities. See also Data Custodian |
Data Strategy |
A Data Strategy is a broad, high-level plan for an enterprise’s data and how it can be used to meet the enterprise’s current and future business needs. |
Data Streaming |
Data streaming refers to the continuous and real-time flow of data from various sources to a destination or processing system. It involves the transmission of data in small, incremental units or events, rather than in large batches. |
Data Swamp |
A data swamp is a deteriorated and unmanaged data lake that is either inaccessible to its intended users or is providing little value. |
Data Transformation |
Data transformation refers to the process of translating information from one format to another. This could be between layers (i.e., a connector service translating between the Common Data Model and an application's data model), or between document types (i.e., a Purchase Order Request to a Sales Order). |
Data Warehouse |
Data warehouse is a generic term for a system used for storing, retrieving, and managing large amounts of any type of data. It contains recent snapshots of corporate data and is often remote from operational, transactional databases. Users can use this database without worrying about slowing down day-to-day operations of the production database. Data warehouse software often includes advanced filtering and sophisticated compression and hashing techniques for fast searches. Bill Inmon, the recognised father of the data warehousing concept, defines a data warehouse as a subject-orientated, integrated, time variant, non-volatile collection of data in support of management's decision-making processes (See: https://en.wikipedia.org/wiki/Bill_Inmon ). |
Data Workflow |
A data workflow refers to the sequence of steps or processes involved in the movement, transformation, and analysis of data within an organisation. It outlines the specific tasks, dependencies, and interactions between different data-related activities, such as data ingestion, data processing, data transformation, and data visualisation. Data workflows often include data pipelines, data transformations, data quality checks, and data storage mechanisms. They are designed to streamline and automate the flow of data, ensuring that data moves efficiently through various stages to ultimately provide valuable insights and support decision-making processes. |
Database |
A database is an organised collection of data. See also DBMS. |
Database Administrator (DBA) |
A Database Administrator (DBA) is an IT professional responsible for managing, configuring, and maintaining an organisation's database(s). DBAs ensure the secure, efficient, and reliable operation of databases by implementing security measures, optimising performance, performing backups and recovery, and resolving database-related issues.
DataOps |
DataOps is a set of practices, processes, and technologies that combines an integrated and process-oriented perspective on data with automation to improve quality, speed, and collaboration in the area of data analytics. The aim of DataOps is to accelerate the quick delivery of high-quality data to data consumers. |
DBMS |
Database Management System (DBMS) is a software system that facilitates the creation, maintenance, and use of an electronic database. Examples include SQL Server, DB2, and Oracle. |
DDL |
Data Definition Language (DDL) is a standard for commands that define the different structures in a database. DDL statements create, modify, and remove database objects such as tables, indexes, and users e.g., CREATE, ALTER, and DROP. |
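To illustrate (a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for the example), the CREATE, ALTER and DROP statements below define, modify and remove a database object:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throw-away in-memory database

# CREATE: define a new database structure
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

# ALTER: modify the existing structure
conn.execute("ALTER TABLE customer ADD COLUMN email TEXT")

# DROP: remove the structure
conn.execute("DROP TABLE customer")

conn.close()
```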
DDoS |
A Distributed Denial-of-Service (DDoS) attack is one in which a multitude of compromised systems attack a single target, thereby causing denial of service for users of the targeted system. The flood of incoming messages to the target system essentially forces it to shut down, thereby denying service to legitimate users of the system. |
Decision Logic |
Decision logic rules are the core of what is typically referred to as the business logic and have five types: • Mandatory Constraints – Rules that reject the attempted business transaction • Guidelines – Rules that do not reject the transaction but warn about an undesirable circumstance; these usually translate to warning messages • Action Enablers – Rules that test conditions and, when finding them true, initiate another business event, message or other activity • Computations – Rules that create new information from existing information based on mathematical computation, resulting in a piece of knowledge that cannot simply be known • Inferences – Rules that create new information from existing information, resulting in a piece of knowledge used as a new fact for the rule engine to consider. (From "Business Rules Applied", Barbara Von Halle; https://www.amazon.com.au/Business-Rules-Applied-Building-Approach/dp/0471412937/ref=sr_1_1) |
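The sketch below is illustrative only (the withdrawal scenario, limits and messages are assumptions); it shows how a mandatory constraint, a guideline and a computation might each be expressed in code:

```python
DAILY_LIMIT = 1000.0          # assumed business parameters for the example
LOW_BALANCE_WARNING = 100.0
FEE_RATE = 0.01


def process_withdrawal(balance: float, amount: float) -> float:
    # Mandatory constraint: reject the attempted transaction outright.
    if amount > DAILY_LIMIT:
        raise ValueError("Withdrawal exceeds the daily limit")

    # Computation: derive new information (the fee) from existing information.
    fee = amount * FEE_RATE
    new_balance = balance - amount - fee

    # Guideline: do not reject, but warn about an undesirable circumstance.
    if new_balance < LOW_BALANCE_WARNING:
        print("Warning: balance will fall below the low-balance threshold")

    return new_balance


print(process_withdrawal(balance=500.0, amount=250.0))
```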
Deep Learning |
Deep Learning is a branch of machine learning based on multi-layered (deep) neural networks, made practical by large amounts of compute power from GPUs and distributed computing. |
Delphi Method |
Delphi Method is a structured communication technique, originally developed as a systematic, interactive forecasting method which relies on a panel of SMEs. The SMEs answer questionnaires in two or more rounds; after each round, a facilitator provides an anonymous summary of the SMEs' forecasts from the previous round as well as the reasons they provided for their judgments. Thus, SMEs are encouraged to revise their earlier answers in light of the replies of other members of their panel. |
Descriptive Analytics |
Descriptive analytics is a branch of data analytics that focuses on understanding historical data and providing insights into what has happened in the past. It involves summarising, visualising, and interpreting data to gain a better understanding of patterns, trends, and relationships. |
Digital Certificate |
Digital certificates bind an identity to a pair of electronic keys that can be used to encrypt and sign digital information. A digital certificate makes it possible to verify someone's claim that they have the right to use a given key, helping to prevent people from using fake keys to impersonate other users. Used in conjunction with encryption, digital certificates provide a more complete security solution, assuring the identity of all parties involved in a transaction. |
Disk Array |
Disk array is a disk storage system that contains multiple disk drives. |
Distributed Application |
A distributed application describes a multi-tiered, loosely-coupled set of software components in which the user interface, business logic, and data access code are separate services, usually mediated by an integration platform. See Microservice for an example; and compare Monolithic Application. |
Distributed Computing |
Distributed computing is a field in computing that uses networked computers to solve problems. The component computers in the system pass messages to each other to achieve their goal. A client-server computing architecture is an example of distributed computing. |
Diversity |
Diversity refers to the visible and invisible differences that exist between people including (but not limited to) disability, sex, sexual orientation, gender identity and intersex status, age, race, ethnicity, language, religion and belief, culture, physical impairment, and relationship and parental status. Diversity also encompasses the ways people differ in terms of their educational background, life and working experiences, skills, perspectives, carer responsibilities, socio-economic background, and geographical location. It includes diverse ways of thinking and working. |
DMN |
Decision Model and Notation (DMN) is a standard approach for describing and modelling repeatable decisions within organisations to ensure that decision models are interchangeable. The DMN standard supports decision management and business rules. The notation is designed to be readable by business and IT users alike. This enables teams to collaborate in defining a decision model. |
DMZ |
A demilitarised zone (DMZ) is a computer host or small network inserted as a ‘neutral zone’ between an organisation’s private network and the outside public network. It prevents outside users from getting direct access to a server that contains organisation data. |
DNS |
Domain Name System (DNS) is a hierarchical distributed naming system for computers, services or any resource connected to the Internet or a private network. It translates Internet domain and host names to IP addresses i.e., it converts the names typed into the Web browser address bar to the IP addresses of the Web servers hosting those sites. |
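For example (a minimal sketch using Python's standard socket module; the host name is arbitrary and network access is required), a DNS lookup resolves a name to an IP address:

```python
import socket

# Resolve a host name to an IP address via the configured DNS resolver.
ip_address = socket.gethostbyname("example.com")
print(ip_address)  # e.g. 93.184.216.34 (the actual address may vary)
```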
DoS |
A Denial of Service (DoS) attack is intended to disable networked systems by blocking or severely limiting access to them. |
Dunning Letter |
A dunning letter is a notification sent to a customer, stating that the customer is overdue in paying an account receivable to the sender. |
EAI |
Enterprise Application Integration (EAI) allows an organisation to establish a technology infrastructure that seamlessly links its heterogeneous business applications, both packaged and home grown, into one unified system so that processes and data can be shared throughout the organisation. |
EBM |
Enterprise Business Message (EBM) defines messages that contain one or more Enterprise Business Objects. EBMs describe the structure of input and output messages for all Enterprise Services, Business Services, and (when applicable) Enabler Services. Provider Connector services require EBMs as input and output messages and translate to/from system messages; Boundary services use any input/output message required but may transform to/from EBMs if calling Enterprise Services or Enabler Services. |
EBO |
Enterprise Business Object (EBO) describes Business Entities defined using a Canonical Data Format derived from the Common Data Model. |
ECM |
Enterprise Content Management (ECM) is an extension of content management to include a timeline for each content item, and having a process for content creation, approval, and distribution. |
EDI |
Electronic Data Interchange (EDI) is a method for transferring data between different computer systems or computer networks and is commonly used for e-commerce transactions. |
EFT |
Electronic Funds Transfer (EFT) is the electronic exchange or transfer of money from one account to another, either within a single financial institution or across multiple institutions, through computer-based systems. |
ELT |
Extract, Load, Transform (ELT): In contrast to ETL, in ELT models the data is not transformed on entry to the data lake but instead is stored in its original raw format. This enables faster loading times. However, ELT requires sufficient processing power within the data processing engine to carry out the transformation on demand, to return the results in a timely manner. Since the data is not processed on entry to the data lake, the query and schema do not need to be defined a priori (although often the schema will be available during load since many data sources are extracts from databases or similar structured data systems and hence have an associated schema). ELT is a data pipeline model (See https://en.wikipedia.org/wiki/Extract,_load,_transform). Extraction: This first step involves copying data from the source system. Loading: During the loading step, the pipeline replicates data from the source into the target system, which might be a data warehouse or data lake. Transformation: Once the data is in the target system, organisations can run whatever transformations they need. Often organisations will transform raw data in different ways for use with different tools or business processes. |
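As an illustration (a minimal sketch; the record shapes and the in-memory "lake" are assumptions), an ELT pipeline copies raw records into the lake unchanged and only transforms them when a consumer asks a question:

```python
import json

raw_lake = []  # stands in for raw storage in a data lake

# Extract + Load: copy source records into the lake in their original form.
source_records = ['{"cust": "A1", "amount": "12.50"}',
                  '{"cust": "B2", "amount": "7.00"}']
for record in source_records:
    raw_lake.append(record)  # no transformation on entry


# Transform (on demand): shape the raw data only when it is queried.
def total_by_customer() -> dict:
    totals: dict = {}
    for raw in raw_lake:
        rec = json.loads(raw)
        totals[rec["cust"]] = totals.get(rec["cust"], 0.0) + float(rec["amount"])
    return totals


print(total_by_customer())  # {'A1': 12.5, 'B2': 7.0}
```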
Enabler Service |
Enabler services are service-based implementations of enterprise IT common routines. The centralisation of enterprise-wide IT functionality into a set of services ensures its consistency, improves flexibility and modifiability and reduces the total cost of ownership of IT. |
Enterprise Architecture |
Enterprise Architecture is the alignment of business, people, and technology design so that together they deliver business intent. |
Enterprise Data |
Enterprise data forms a permanent record within databases attached to operational business systems. This is in contrast to transient data used in online functions, which is either discarded or transformed into enterprise data by subsequent processing. |
Enterprise Integration Platform |
The Enterprise Integration Platform is a logical or physical separation of the Integration Platform that contains only Enterprise and (optionally) Boundary Services. |
Enterprise Service |
An enterprise service is a service that exhibits the following additional characteristics: • Technology neutral thereby achieving endpoint platform independence; • Consumable enabling automated discovery and usage; • Contains no business logic directly, only through orchestrating services; • Data requirements are expressed in terms of a common data model; • Reusable i.e., reuse of the service not copying of the code/implementation; • Abstracted as its interface is separate from the implementation; • Published specification of the service from a functionality not implementation perspective; • Contract based which places obligations on the consumer and provider; • High order business concept as the functionality is at a level of granularity that is meaningful to the business. Gartner Group refers to these services as ‘composite services’ to reflect the fact that they are composed of a number of lower order services, which it refers to as ‘elemental services’ (See https://www.gartner.com/en/information-technology/glossary/composite-application-2 ). |
Entity Service |
An entity service is a reusable service with a business functional context related to one or more Business Entities. An entity service often exposes no more than simple Create, Read, Update and Delete (CRUD) functions. |
EOI |
Evidence of Identity (EOI) establishes a person’s identity and authorisation to act via different identification techniques such as date of birth, address, keyword verification etc. |
Epic |
In Agile software development an Epic is a significant piece of work spanning multiple product releases. It is analogous to a project and is decomposed into features and user stories. |
Equity (Investment) |
Equity, typically referred to as shareholders' equity (or owners' equity for privately held companies), represents the amount of money that would be returned to a company’s shareholders if all the assets were liquidated and all the company's debt was paid off. In the case of acquisition, it is the value of company sales minus any liabilities owed by the company not transferred with the sale. (From https://www.investopedia.com/terms/e/equity.asp ) |
Equity (Workplace) |
Equity refers to ensuring that everyone within the workplace is treated in a fair manner according to their individual needs and circumstances and adopting practices which provide everyone with equal opportunities to succeed at work. |
ERP |
Enterprise Resource Planning (ERP) is the integrated management of business processes, often in real time. ERP is a category of business management software that is used to collect, store, manage, and interpret data from business activities. |
ESB |
Enterprise Service Bus (ESB) provides an abstraction layer on top of a service layer that allows integration architects to exploit the value of services without writing code. Contrary to the more classical EAI approach of a monolithic stack in a hub and spoke architecture, the foundation of an ESB is built from base functions broken up into their constituent parts, with distributed deployment where needed, working in harmony as necessary. An ESB is typically implemented by technologies found in a category of middleware infrastructure products usually based on Web Services standards, that provides foundational services for more complex service-based architectures via an event driven and XML- or JSON-based messaging engine. |
ETL |
Extract, Transform, and Load (ETL) is a process that involves: • extracting data from outside sources; • transforming it to fit business needs, and; • loading it into the target database. ETL is most frequently used as the way data gets loaded into a data warehouse, but the term ETL can in fact refer to a process that loads any database. |
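The sketch below is illustrative only (the source data and target table are invented); it shows the three ETL steps with the transformation applied before the load, in contrast to ELT above:

```python
import sqlite3

# Extract: read records from an outside source (here, a simple list).
source = [("jo smith", "2024-01-05", "125.00"),
          ("amy lee", "2024-01-06", "80.10")]

# Transform: reshape the data to fit business needs before loading.
transformed = [(name.title(), date, float(amount)) for name, date, amount in source]

# Load: write the transformed rows into the target database.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (customer TEXT, sale_date TEXT, amount REAL)")
warehouse.executemany("INSERT INTO sales VALUES (?, ?, ?)", transformed)

print(warehouse.execute("SELECT * FROM sales").fetchall())
```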
Event |
An event is a significant change of state. It can also refer to the notification of a change in state. The occurrence of an event is often accompanied by the creation of a message, notifying event consumers of the event. |
Event Consumer |
An Event Consumer performs actions based on the events that it consumes. Typical actions include dashboard instrumentation (e.g., business activity monitoring); alerts (e.g., SLA thresholds that are close to being (or have been) exceeded) and triggering Business Processes. |
Event Processor |
An Event Processor performs actions on events. Example actions include enrichment, aggregation and correlation. |
Event Source |
An Event Source emits system or business events into the event processing system. Examples of event sources include IoT, SCADA, RFID sensors and actuators, integration platforms, applications, and system components. |
Event-driven Architecture |
An Event-driven Architecture is a software architecture paradigm that is concerned with the production, detection, and management of events. Such an architecture typically consists of event emitters, event consumers, and event channels. |
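A minimal sketch of the three roles (event emitter, event channel and event consumer), using an in-process queue as the channel; a real implementation would typically use a message broker or event streaming platform:

```python
from queue import Queue

channel: Queue = Queue()  # event channel (in-process stand-in for a broker)


def emitter() -> None:
    """Event emitter: publishes a notification of a change in state."""
    channel.put({"type": "OrderPlaced", "orderId": 1001})


def consumer() -> None:
    """Event consumer: reacts to events received from the channel."""
    while not channel.empty():
        event = channel.get()
        print(f"handling {event['type']} for order {event['orderId']}")


emitter()
consumer()
```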
Exception-based STP |
Exception-based Straight-Through-Processing (STP) means that even when it is known at the design stage that not all steps can be automated, either initially or for the foreseeable future, the solution will be built as an STP solution but with invocations to the Work Management business capability for manual steps. |
Execution Context |
Execution Context refers to the environment in which a function executes. An integration platform is an example of an execution context. |
Feature |
In Agile software development, features represent concrete evolutions or significant additions to the product, bridging the project intent described in an Epic and the more detailed actions described by User Stories. Features are often scoped to the product release cycle. |
Flat File |
A flat file database describes any means to encode a database model, most commonly a table, as a single file. A flat file can be a plain text file or a binary file and there are usually no structural relationships between the records. |
Formative Testing |
Formative testing is conducted during the development of a product to mould or improve it with a test administrator and participant present. The outcomes from this testing are participant comments, attitudes, reasons for actions; photographs and videos; usability issues and suggested fixes. |
FTP |
File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet. |
Functional Service |
Functional services perform a calculation based on a provided set of input information. (Literally, y=f(x)) |
Generative AI |
Generative Artificial Intelligence (GenAI) is a field of study in Artificial Intelligence concerned with generating text, images, video, or other data using generative models. |
Generic Components |
Generic components represent reusable ‘building blocks’ from which applications and services are constructed. Unlike boundary components they are not designed for a specific application and do not know how they are consumed. |
Generic Service |
Generic service is a basic building block for integration architecture and represents the lowest level service containing specific business logic in the architecture. It is not an enabler or data integration service but is not of the correct granularity to be an enterprise service. It can be orchestrated by the service orchestration layer, at which point this creates an Enterprise Service. |
Glossary |
This is a glossary of terms used within an organisation. It should be populated initially from CDM Entity Definitions and BCM Capability definitions. In addition, there should be common taxonomies and subjects of interest in the Data Lake. |
Hard Dependency |
A hard dependency is a dependency between projects or project tasks that cannot be avoided or removed, and a delay in the dependency task/project WILL result in a delay in the dependent task/project. If there is no financially viable alternative but for the dependent project to wait, it is considered a hard dependency. If the dependent project can go ahead, albeit sub-optimally, then this is considered a soft dependency. |
Heritage Solution |
A heritage solution is an application from an organisation’s existing solution portfolio that will be retained, either entirely or in part, as part of its target state solution architecture. |
HTML |
Hypertext Markup Language (HTML) is the standard language used to create web pages. |
HTTP |
The Hypertext Transfer Protocol (HTTP) provides a standard for Web browsers and servers to communicate. |
HVAC |
Heating, Ventilation, and Air Conditioning (HVAC) refers to the climate control systems that form part of a building's infrastructure. |
IaaS |
Infrastructure as a Service (IaaS) is one of three fundamental service models of cloud computing. It provides access to computing resources in a virtualised environment, i.e., the cloud, across a public connection, usually the Internet. For IaaS, the computing resource provided is specifically that of virtualised hardware or computing infrastructure. It also includes virtual server space, network connections, bandwidth, IP addresses and load balancers. Physically, the pool of hardware resources is drawn from a multitude of servers and networks usually distributed across numerous data centres, all of which the cloud provider is responsible for maintaining and to which the customer has access. |
IAF |
Integrated Architecture Framework (IAF). The Fragile to Agile approach to architecture incorporates all the elements required for successful organisational change. The Integrated Architecture Framework encompasses the entire breadth (from strategy formulation to implementation) and depth (from the articulation of business intent to technology infrastructure design) of business change. It simplifies governance, investment prioritisation and adherence to strategic business intent. |
ICR |
Intelligent Character Recognition (ICR) is an advanced optical character recognition (OCR) or, more specifically, handwriting recognition system that allows fonts and different styles of handwriting to be learned by a computer during processing to improve accuracy and recognition levels. |
Idempotency |
Idempotency refers to a property of a service whereby repeated calls with the same input produce the same result as a single call; it has a significant impact on service robustness. |
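To illustrate the property (a sketch only; the operations are invented), setting a value is idempotent while incrementing it is not:

```python
account = {"status": "open", "failed_logins": 0}


def set_status_closed() -> None:
    """Idempotent: calling this once or many times leaves the same result."""
    account["status"] = "closed"


def record_failed_login() -> None:
    """Not idempotent: each call changes the result."""
    account["failed_logins"] += 1


for _ in range(3):
    set_status_closed()
    record_failed_login()

print(account)  # {'status': 'closed', 'failed_logins': 3}
```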
Identity Management |
Identity management provides processes, policies, and technologies that enable the facilitation and management of users’ access to critical online applications and resources while protecting confidential personal and business information from unauthorised access. Identity management solutions are employed to administer user authentication, access rights, access restrictions, account profiles, passwords, and other attributes supportive of users' roles/profiles on one or more applications or systems. |
IIoT |
The Industrial Internet of Things (IIoT) is a subset of IoT devices related to industrial applications such as process and production control in manufacturing, energy management, etc. It is an evolution of locally situated and distributed control systems that uses cloud computing and internet protocols. |
Inclusion |
Inclusion refers to ensuring that employees and consultants enjoy equal opportunity without any barriers due to their differences. It is the extent to which the diverse mix of people are valued, respected, connected, progressing, and contributing to success. |
Information |
Information is data that has been given meaning by applying an interpretation. |
Information Hierarchy |
Information Hierarchy is the structure and priority given to various pieces of information or the structural and/or functional relationships between data, information, knowledge, and wisdom. It is used to help users choose where to direct their attention, know where to find things, and what types of information there are to find. |
Information Management Strategy |
An Information Management Strategy is a plan for how an enterprise will collect, use, and govern information. It will describe the current state of information management in the organisation, and state goals to be achieved to deliver better value to the business. The strategy will outline the tasks and actions needed to achieve the business goals. |
Information Security |
Information Security protects an organisation’s valuable assets, such as information, intellectual property, employees, and technology. With the selection and application of appropriate safeguards or controls, information security ensures an organisation can meet its business objectives by protecting its physical and financial resources, reputation, legal position, employees, and other tangible and intangible assets. |
Information Security Manual |
The Australian Cyber Security Centre (ACSC) within the Australian Signals Directorate (ASD) produces the Australian Government Information Security Manual (ISM). The purpose of the ISM is to outline a cyber security framework that organisations can apply, using their risk management framework, to protect their information and systems from cyber threats. |
Infrastructure |
Infrastructure refers to the hardware resources, operating system software, devices, and associated software components that host software applications. In general, the term is sometimes used to include networking components as well. The specialist use of infrastructure within the context of the Fragile to Agile Integrated Architecture Framework excludes networking which is addressed within the Communications Reference Architecture. |
Infrastructure Architecture |
Infrastructure Architecture forms part of the base documentation of the Fragile to Agile IAF. It describes the Strategic Infrastructure Architecture and details the core concepts and principles to be adhered to when implementing solutions. |
Infrastructure System Management |
Infrastructure System Management is a software capability to manage infrastructure resources. |
Integration Architecture |
Integration Architecture forms part of the base documentation of the Fragile to Agile IAF. It describes the Strategic Integration Architecture and details the core concepts and principles to be adhered to when integrating solutions. |
Integration Platform |
Integration platform is a generic term for the middleware used for orchestrating services, managing APIs, brokering service messages and/or handling events. The term covers ESBs, message brokers, message queues, API management etc. |
IoT |
Internet of Things (IoT) refers to the physical objects (or groups of such objects) that are embedded with sensors, processing ability, software, and other technologies that connect and exchange data with other devices and systems over the Internet or other communications networks. |
IoT Transaction Data |
IoT Transaction Data is notification data created by an event impacting on an asset resulting in a response by a sensor. Such events are triggered when an operating parameter that the sensor is programmed for is breached. The sensor may also trigger a transducer to take immediate corrective action. IoT transaction data is loaded into the IoT data staging area for transformation or conditioning to reduce ‘noise’ and volume and make it more useful. |
IP Address |
An IP Address, or Internet Protocol Address, is a numerical label assigned to each device (e.g., desktop, printer) participating in a computer network that uses the Internet protocol for communication. |
ISCM |
Information Security Continuous Monitoring (ISCM) is the practice of maintaining ongoing awareness of information security, vulnerabilities, and threats to support the organisation’s risk tolerance. Note: The terms “continuous” and “ongoing” in this context mean that security controls and organisational risks are assessed and analysed at a frequency sufficient to support risk-based security decisions to adequately protect the organisation's digital assets. |
ISDN |
Integrated Services Digital Network (ISDN) is a telephone-based network system that operates over circuit-switched or dedicated lines. |
ISO |
International Organisation for Standardisation (ISO) is the world’s largest developer of voluntary international standards, which give specifications for products, services and good practice. The standards are developed through global consensus and designed to break down barriers to international trade. |
ISO 27001 |
ISO 27001 formally specifies a management system that is intended to bring information security under explicit management control. Being a formal specification means that it mandates specific requirements. Organisations that claim to have adopted ISO/IEC 27001 can therefore be formally audited and certified compliant with the standard. |
IT Service |
An IT Service comprises infrastructure, operating systems, software applications, and management processes that deliver the information and solutions required by the business and its customers. |
IT Service Catalogue |
An IT Service Catalog(ue) is a list of available technology resources and offerings within an organisation. It contains information about deliverables, prices, contact points, and processes for requesting an IT service. The catalogue usually has two perspectives: a customer-facing view from which business users can browse and select services, and a technical view that documents exactly what is required to deliver each service. |
IT Service Management |
IT Service Management (ITSM) is a process-based practice intended to align the delivery of information technology (IT) services with the needs of the enterprise, emphasising benefits to customers. |
ITIL |
Information Technology Infrastructure Library (ITIL) is a framework of best practices intended to facilitate the delivery of high-quality IT services. ITIL outlines an extensive set of management procedures that are intended to support businesses in achieving both quality and value in IT operations. These procedures are supplier independent and have been developed to provide guidance across the breadth of IT infrastructure, development, and operations. |
IVR |
Interactive Voice Response (IVR) is a technology that allows a computer to interact with customers through the use of voice and keypad inputs. IVR allows customers to interact with an organisation’s database via telephone keypad or speech recognition, after which they can service their own inquiries by following the IVR dialogue. IVR systems can respond with pre-recorded or dynamically generated audio to further direct users on how to proceed. IVR applications can be used to control almost any function where the interface can be broken down into a series of simple interactions. |
IWR |
Intelligent Word Recognition (IWR) is the recognition of unconstrained handwritten words. IWR recognises entire handwritten words or phrases instead of character-by-character, like its predecessor, Optical Character Recognition (OCR). IWR technology matches handwritten or printed words to a user-defined dictionary, significantly reducing character errors encountered in typical character-based recognition engines. New technology typically uses IWR, OCR, and ICR together, which allows for the processing of documents, either constrained (hand printed or machine printed) or unconstrained (freeform cursive). IWR also eliminates a large percentage of the manual data entry of handwritten documents that, in the past, could only be keyed by a human, creating an automated workflow. |
JEA |
Just Enough Administration (JEA) is a Microsoft security technology that enables delegated administration for anything managed by PowerShell. With JEA, you can: • Reduce the number of administrators on your machines using virtual accounts or group-managed service accounts to perform privileged actions on behalf of regular users. • Limit what users can do by specifying which cmdlets, functions, and external commands they can run. • Better understand what your users are doing with transcripts and logs that show you exactly which commands a user executed during their session. |
JIT |
Just-in-Time access enables users to request elevated access to a managed application's resources or virtual machines for troubleshooting or maintenance. By default, users have read-only access to resources, but for a specific time period a user can be granted elevated access. |
JSON |
JavaScript Object Notation (JSON) is a standard file format used for data interchange, particularly for Web Services. |
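For example, Python's standard json module serialises a data structure to JSON text and parses it back (the payload shown is arbitrary):

```python
import json

payload = {"orderId": 1001, "items": ["widget", "gadget"], "paid": True}

text = json.dumps(payload)   # serialise to a JSON string
print(text)                  # {"orderId": 1001, "items": ["widget", "gadget"], "paid": true}

restored = json.loads(text)  # parse JSON back into Python objects
print(restored["items"][0])  # widget
```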
Kaizen |
Kaizen, also known as continuous improvement, is a long-term approach to work that systematically seeks to achieve small, incremental changes in processes in order to improve efficiency and quality. Kaizen can be applied to any kind of work but is best known for being used in lean manufacturing and lean programming. If a work environment practices Kaizen, continuous improvement is the responsibility of every worker, not just a selected few. |
Knowledge |
Knowledge is the theoretical or practical understanding of a subject. Knowledge is gained through the use of a collection of information in a particular context. |
KPI |
Key Performance Indicator (KPI) is a type of performance measurement; KPIs evaluate the success of an organisation or of a particular activity in which it engages. KPIs, in practical terms and for strategic development, are objectives to be targeted that will add the most value to the organisation. KPIs define a set of values against which to measure. These raw sets of values, which are fed to systems in charge of summarising the information, are called indicators. Indicators can be summarised into the following subcategories: • Quantitative indicators that can be presented with a number; • Qualitative indicators that cannot be presented as a number; • Leading indicators that can predict the outcome of a process; • Lagging indicators that present the success or failure post hoc; • Input indicators that measure the amount of resources consumed during the generation of the outcome; • Process indicators that represent the efficiency or productivity of a process; • Output indicators that reflect the outcome or results of process activities; • Practical indicators that interface with existing processes; • Directional indicators specifying whether or not an organisation is achieving its targets; • Actionable indicators that are within an organisation's control to effect change; • Financial indicators used in performance measurement and operating indices. |
LAN |
A Local Area Network (LAN) is a computer network that connects computers and devices in a limited geographical area such as home, school, computer laboratory or office building. The defining characteristics of LANs, in contrast to wide area networks (WANs), include usually higher data transfer rates, smaller geographic area, and lack of a need for leased telecommunication lines. |
Latency |
Latency refers to the time delay between data being sent from a source system and appearing in the target system. A high latency solution has a “human-noticeable” delay in the data arriving; a low latency solution does not have a noticeable delay under normal conditions. Low latency solutions are also sometimes referred to as near-real-time or pseudo-real-time solutions. |
Legacy Solution |
A legacy solution is an application from an organisation’s existing solution portfolio that will not be retained as part of its target state solution architecture. |
Machine Learning |
Machine learning (ML) is a field of Artificial Intelligence concerned with the development of algorithms and systems that can learn from data and generalise to unseen data, so that they can perform tasks without explicit instruction. |
Malware |
Malware (short for malicious software) is software that is intended to damage, disrupt, or disable computer systems, gather sensitive information, or gain access to private computers. It can appear in the form of executable code, scripts, active content, and other software. It is a general term used to refer to a variety of forms of hostile or intrusive software and may include viruses, worms, trojan horses, ransomware, spyware, adware, scareware, and other malicious programs. |
Master Data |
Master data describes the core business entities, and typically includes stakeholder, product, brand and service entities. Master Data Management (MDM) is the identification of the source for each item then the reuse of that data from the source. MDM embodies the Single Version of the Truth principle. The master data domain has high requirements for accuracy, completeness, consistency, timeliness, relevance, and trust. It has an enterprise-wide scope of integration – many applications reuse it from a centrally managed source. |
Message Schema |
A message schema is based on the canonical message format; it forms the message component of a service interface and defines the inputs and outputs of service interfaces as serialised complex data types. |
Metadata |
Metadata is traditionally given the definition of “data about data”. The data it refers to can be structured or unstructured as both need to be discoverable in the appropriate context. The relevance of metadata should be reviewed by data professionals to ensure it is fit for purpose. As the metadata is primarily derived from the CDM and Glossary (which contains the taxonomy) it will be to some degree stable over time once the CDM and Glossary are correct and agreed across strategic business units (SBU) for non-operational data. |
Metadata Repository |
A Metadata Repository is a centralised store of metadata, allowing it to be managed and shared. The use of a repository enforces standards and drives connections between data, processes, and applications. The metadata repository is: Generic – Metadata is stored by capability area as opposed to being application, DBMS, or vendor specific; Integrated – it provides a view across the organisation within each subject area; Current and historical – contains past, current, and future metadata to reflect the current and future business and technical environment. Content includes: · Logical data models; · Definition of all the physical data structures existing in the enterprise including: · The structure, syntax, and owners of data that is received from source applications and sent to target applications or published; · Differences between the physical structures in use and the canonical data model (e.g., where a COTS application provides its own physical data structures); · Taxonomy of reference objects of interest; · How those physical data structures interact within the application integration solution domain; · Internal functional subcomponents for governance, development lifecycle integration, configuration, security, and metrics. |
MFA |
Multi-Factor Authentication (MFA) is an authentication method that requires the user to provide two or more different verification factors to gain access to a resource. Factors include: • something you know (e.g., password/personal identification number (PIN)); • something you have (e.g., cryptographic identification device, token); or • something you are (e.g., biometric). |
Microservice |
A microservice is a non-agnostic consumer service designed to support a single mission critical service or product. A microservice is not typically built for reuse beyond its initial use-case. Please refer to Microservice Architecture for more information (for example, see https://www.martinfowler.com/microservices/ and https://microservices.io/ ). |
Mission Thread |
A mission thread is a sequence of end-to-end activities and events that takes place to accomplish the execution of a capability across systems and platforms. |
MLS |
Message Layer Security (MLS) is an application layer service that facilitates the protection of message information, ensuring that only the intended recipient can decrypt and read encrypted sections of a message and that the message has not been altered in any way during transit. |
Monolithic Application |
A monolithic application describes a single-tiered, self-contained software application in which the user interface, business logic, and data access code are combined into a single program from a single platform. |
Monte Carlo Simulation |
Monte Carlo simulations are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; typically, one runs simulations many times over in order to obtain the distribution of an unknown probabilistic entity. The name is derived from the Monte Carlo casino, by analogy with playing and recording results at a gambling table. |
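A classic minimal example (illustrative only) estimates π by repeated random sampling: the proportion of random points falling inside a quarter circle approaches π/4 as the number of samples grows:

```python
import random

samples = 100_000
inside = 0
for _ in range(samples):
    x, y = random.random(), random.random()  # a random point in the unit square
    if x * x + y * y <= 1.0:                 # does it fall inside the quarter circle?
        inside += 1

estimate = 4 * inside / samples
print(estimate)  # approximately 3.14, improving as samples increases
```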
MTTD |
Mean Time to Detect (MTTD, sometimes referred to as Mean Time to Discovery). This is the average time it takes an organisation, or more likely a monitoring system, to realise that something has failed or an incident has occurred. |
MTTF |
Mean Time to Failure (MTTF) is a measure of the mean time between infrastructure component failures. A smaller MTTF implies the component is more likely to fail, decreasing its availability and reliability. Improving MTTF can be achieved by utilising more reliable components or introducing redundancy or fault tolerance such that failure of the component does not cause failure of operation. |
MTTR (Infrastructure) |
Mean Time to Restore (MTTR) is a measure of the time it takes for a system to be restored to an operational state. The longer a component takes to restore, the lower the availability. Improving MTTR can be achieved by utilising components that are more readily diagnosed under fault conditions (reducing the time to determine the cause of failure), by reducing ‘switchover’ time in case of a redundant component, or by establishing better support SLAs (reducing the response and repair time). |
MTTR (Security) |
Mean Time to Respond (MTTR, sometimes written as Mean Time to Response) is the average time required to return a system to operational condition after receiving notification of a failure or cyberattack. |
Multi-channel Strategy |
A multi-channel strategy uses many different types of marketing, such as web sites, catalogues, telephone calls, mail, and television advertisements in order to obtain the business of a consumer. This type of strategy must have good systems of management in the supply chain, so as to furnish not only the appropriate goods but also prices and details across all of the different media. |
NCCP |
National Consumer Credit Protection Act (NCCP) is legislation designed to protect consumers and ensure ethical and professional standards in the finance industry through the National Credit Code (NCC). The act is regulated and enforced by ASIC. A major part of the act is that all lenders and mortgage brokers are required to hold a credit license or be registered as an authorised credit representative. |
Net Promoter Score |
Net Promoter Score (NPS) is a customer loyalty metric developed to measure the loyalty that exists between a service provider (e.g., organisation, employer, entity) and consumer. The service provider is the entity that is asking the NPS survey questions; the consumer is the customer, employee, or respondent to the survey. The NPS score can be as low as -100 (everybody is a detractor) or as high as +100 (everybody is a promoter). An NPS that is positive (i.e., higher than zero) is regarded as good, and an NPS of +50 is excellent. |
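The score is calculated by subtracting the percentage of detractors (scores 0-6) from the percentage of promoters (scores 9-10), as in the sketch below (the survey responses are invented for the example):

```python
responses = [10, 9, 8, 7, 6, 10, 9, 3, 8, 10]  # 0-10 ratings from a hypothetical survey

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)

nps = (promoters - detractors) / len(responses) * 100
print(round(nps))  # 5 promoters, 2 detractors over 10 responses -> NPS of 30
```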
Network Protocol |
A network protocol defines rules and conventions for communication between network devices. Protocols for computer networking all generally use packet switching techniques to send and receive messages in the form of packets. |
Neural Network |
A neural network (NN) is a computational model inspired by the structure of animal brains. A NN consists of nodes, called artificial neurons, connected by edges. These model the neurons and synapses, respectively, in the brain. Each node sends and receives signals to other connected neurons. The signal sent is the result of a non-linear function and its strength is governed by a weighting. A NN must be trained on data to determine the weightings. |
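A minimal sketch of a single artificial neuron (illustrative only; the weights would normally be learned during training rather than fixed by hand):

```python
import math


def neuron(inputs: list, weights: list, bias: float) -> float:
    """Weighted sum of inputs followed by a non-linear activation (sigmoid)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # signal passed on to connected neurons


# Hand-picked weights stand in for values a training process would determine.
print(neuron(inputs=[0.5, 0.2], weights=[0.8, -0.4], bias=0.1))
```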
NFC |
Near Field Communications (NFC) payment is a contactless payment method based on communication between devices such as smartphones or tablets. Contactless communication allows a user to wave a smartphone over an NFC compatible device to send information without needing to touch the devices together or go through multiple steps setting up a connection. |
NIST |
The National Institute of Standards and Technology (NIST), a unit of the U.S. Commerce Department. Formerly known as the National Bureau of Standards, NIST promotes and maintains measurement standards. It also has active programs for encouraging and assisting industry and science to develop and use these standards. The NIST Cybersecurity Framework provides a policy framework of computer security guidance for how private sector organisations in the United States and across the world can assess and improve their ability to prevent, detect, and respond to cyber-attacks. The framework has been translated to many languages and is used by the governments of Japan and Israel, among others. It "provides a high-level taxonomy of cybersecurity outcomes and a methodology to assess and manage those outcomes." |
Node |
A physical network node is an active electronic device that is attached to a network, and is capable of sending, receiving, or forwarding information over a communications channel. |
Non-Repudiation |
Non-repudiation refers to a state of affairs where the purported maker of a statement will not be able to successfully challenge the validity of the statement or contract. The term is often seen in a legal setting wherein the authenticity of a signature is being challenged. In such an instance, the authenticity is being "repudiated". In reference to digital security, the meaning and application of non-repudiation means: • A service that provides proof of the integrity and origin of data; • An authentication that with high assurance can be asserted to be genuine. |
OCR |
Optical Character Recognition (OCR) is the mechanical or electronic conversion of scanned or photo images of typewritten or printed text into machine-encoded/computer-readable text. It is widely used as a form of data entry from some sort of original paper data source, whether passport documents, invoices, bank statements, receipts, business cards, mail, or any number of printed records. It is a common method of digitising printed texts so that they can be electronically edited, searched, stored more compactly, displayed online, and used in machine processes such as machine translation, text-to-speech, key data extraction and text mining. OCR is a field of research in pattern recognition, artificial intelligence and computer vision. |
ODBC |
Open Database Connectivity (ODBC) is a standard application programming interface that provides a common software interface (or translation layer) for accessing database management systems, independent of the particular DBMS or operating system. |
OLA |
An Operational Level Agreement (OLA) is a contract that defines how various internal IT teams and specialists within an organisation plan to deliver a service or set of services to support a Service Level Agreement (SLA). The objective of an OLA is to present a clear, concise and measurable description of the IT service provider's internal support relationships. |
OLAP |
Online Analytical Processing (OLAP) is an approach to data analytics that uses multidimensional data stores to aggregate data for rapid access in reporting applications. OLAP databases are built upon hypercubes, typically organised into individual data marts. |
OLTP |
Online Transaction Processing (OLTP) refers to a type of data processing that focuses on real-time transactional operations within a system. It involves capturing, processing, and managing individual transactions, such as inserting, updating, or retrieving records in a database. OLTP systems are designed to handle high volumes of concurrent transactions and provide fast response times, supporting day-to-day operational activities, such as online ordering, inventory management, and customer interactions. |
Omnichannel |
Omnichannel is a business strategy that aims to provide a seamless, effortless customer experience within and between contact channels. An omnichannel customer experience means that the customer can conduct a transaction with a business using any and all possible contact channels, switching between channels during the same transaction. |
On-premise |
On-premise refers to computing facilities that are provided by the organisation itself. An alternative is cloud computing. |
Operating Model |
Operating Model describes the organisation’s method of operating based on the desired level of business process integration and standardisation for delivering products to its customers. The CISR research group at the MIT Sloan School of Management (https://cisr.mit.edu/) devised the term to establish requirements for reusable core capabilities and guide IT investment decision governance. The Operating Model can also be used to drive architecture and infrastructure development ensuring that business needs are met with the right IT foundation. The resulting IT architecture can enable the organisation to be more agile. The four possible operating models are: • Coordination – require low business process standardisation but high business process integration; • Unification – require both high business process standardisation and high business process integration; • Diversification – require low business process standardisation and low business process integration; • Replication – require high business process standardisation but low business process integration. |
Operating System |
An operating system is software that manages computer hardware and software resources and provides common services for computer programs. The operating system is an essential component of the system software in a computer system. |
Operational Data Store |
An Operational Data Store (ODS) is a central database that is · Subject Oriented (Customer, Supplier, Shipment) · Integrated (an aggregate of detailed data from legacy systems. Aggregation could be in response to a specific use case) · Volatile (updated on a regular basis as the source changes, by exception) · Current values (up to date, no archived data) · Detailed (may be derived from data collected across sources – e.g., Collective Account Balance is the sum of all source Account Balances) · Utilised for collective operational decisions and immediate corporate information. |
Operational Integration Platform |
The Operational Integration Platform is a logical or physical separation of the Integration Platform that contains only Technical Services. |
Operational Technology |
Operational technology is hardware and software that detects or causes a change, through the direct monitoring and/or control of industrial equipment, assets, processes, and events. |
Orchestration |
Orchestration is the process of “conducting” the execution of multiple modules to create the desired result. It is directly analogous to the role of a conductor in a musical orchestra. Rather than each player (module) having to understand the whole piece of music and listen for when the previous player finishes before starting, this role is given to the conductor, and each player only has to watch the conductor to be told when to play their piece (to be orchestrated). One of the consequences of separating business processes and business rules from the core underlying systems in a loosely coupled style is to encapsulate and expose the functionality of these systems in a modular fashion. This service-based approach to development designs and creates reusable services and combines these services according to the needs of a specific business process. This technique for combining these loosely coupled services to deliver business function is referred to as orchestration. |
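A minimal sketch of the idea (the service implementations are invented; the process name follows the “Originate a Mortgage” example used elsewhere in this glossary): the individual services know nothing about the overall flow, and the orchestrator, like the conductor, decides when each is called:

```python
# Each reusable service knows nothing about the overall business process.
def verify_identity(application: dict) -> dict:
    return {**application, "identity_verified": True}


def assess_credit_risk(application: dict) -> dict:
    return {**application, "risk_rating": "low"}


def open_account(application: dict) -> dict:
    return {**application, "account_opened": True}


def originate_mortgage(application: dict) -> dict:
    """Orchestrator: the 'conductor' that calls each service in sequence."""
    for service in (verify_identity, assess_credit_risk, open_account):
        application = service(application)
    return application


print(originate_mortgage({"applicant": "Jo Smith"}))
```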
PaaS |
Platform as a Service (PaaS) is a cloud computing services layer that executes on IaaS and provides additional components and frameworks for developing application services. |
PABX |
A Private Automatic Branch Exchange (PABX) is an automatic telephone switching system within an organisation. |
Pace Layering |
Pace-Layered Application Strategy is a methodology for categorising, selecting, managing, and governing applications to support business change, differentiation, and innovation developed by Gartner (see https://www.gartner.com/en/information-technology/glossary/pace-layered-application-strategy ). It categorises solutions by the required pace of change for each layer to guide organisations when managing a software solution portfolio. The three categories are: • Systems of innovation include new applications that are built on an ad hoc basis to address new business requirements or opportunities. These are typically short life cycle projects (zero to 12 months) using departmental or outside resources and consumer-grade technologies. Alternatively known as systems of engagement. • Systems of differentiation include applications that enable unique company processes or industry-specific capabilities. They have a medium life cycle (one to three years), but need to be reconfigured frequently to accommodate changing business practices or customer requirements. • Systems of record are established packaged applications or legacy homegrown systems that support core transaction processing and manage the organisation's critical master data. The rate of change is low, because the processes are well-established and common to most organisations and often are subject to regulatory requirements. A system of record or source system of record is a data management term for an information storage system that is the authoritative data source for a given data element or piece of information. |
PCI DSS |
Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard for organisations that handle branded credit cards from the major card brands including Visa, MasterCard, American Express, Discover, and JCB. The standard, mandated by the card brands and run by the Payment Card Industry Security Standards Council, was created to increase controls around cardholder data to reduce credit card fraud via its exposure. |
Perimeter Defence |
Perimeter defence is one level of a protection suite to defend a network from external attacks. |
PII |
Personally identifiable information (PII) is information that, when used alone or with other relevant data, can identify an individual. Sensitive personally identifiable information can include an individual's full name, taxation ID, driver's licence, financial information, and medical records. |
Pilot |
A pilot is a restricted, low risk implementation of a solution to test its design and solicit feedback from users and stakeholders regarding its operation, which informs the subsequent deployment of the solution. |
PMM |
Project Management Methodology (PMM) is a defined combination of logically related practices, methods, and processes that determine how to plan, develop, control, and deliver a project throughout the continuous implementation process until successful completion and termination. |
PoI |
Proof of Identity (PoI) is a mechanism to verify and validate someone’s identity. |
Point-to-Point |
A point-to-point (P2P) interface connects one application directly with another, with no intermediary. This effectively couples those two solutions together. P2P integration is a primary cause of an Accidental Architecture, but there are situations in which they are a valid design choice. Some microservices may be implemented in a point-to-point fashion. |
Portfolio Management (Investment) |
Portfolio management is the art and science of selecting and overseeing a group of investments that meet the long-term financial objectives and risk tolerance of a client, a company, or an institution. (From https://www.investopedia.com/terms/p/portfoliomanagement.asp) |
Portfolio Management (Project Management) |
Portfolio management is the selection, prioritisation, and control of an organisation’s programmes and projects, in line with its strategic objectives and capacity to deliver. The goal is to balance the implementation of change initiatives and the maintenance of business-as-usual, while optimising return on investment. (From https://www.apm.org.uk/resources/what-is-project-management/what-is-portfolio-management/ ) Portfolio management ensures that an organisation can leverage its project selection and execution success. It refers to the centralised management of one or more project portfolios to achieve strategic objectives. Our research has shown that portfolio management is a way to bridge the gap between strategy and implementation. (From https://www.pmi.org/learning/featured-topics/portfolio ) |
Predictive Analytics |
Predictive analytics is a branch of data analytics that utilises historical data and statistical modelling techniques to make predictions or forecasts about future events or outcomes. It involves analysing patterns, relationships, and trends in data to identify potential future outcomes. |
Prescriptive Analytics |
Prescriptive analytics is an advanced branch of data analytics that leverages historical and real-time data, along with optimisation algorithms and mathematical models, to provide recommendations and insights on the best course of action to achieve desired outcomes. It goes beyond descriptive and predictive analytics by not only identifying what is likely to happen but also suggesting the actions to be taken. |
Presentation Logic |
Presentation logic is concerned with how information is displayed to, and captured from, users of the software e.g., the choice between a pop-up screen and a drop-down menu. The separation of business logic from presentation logic is an important concept for business design as it enables channel switching, consistency of outcome, and reduced unit cost for business and ICT. Types of logic permitted in a channel system include: • User interface display (windows, dialog boxes, etc.); • Monitoring and responding to each user interface event (e.g., enter key pressed, mouse click on a button, etc.); • Calls to services i.e., the invocation of services exposed by service containers; • UI controls, e.g., radio buttons; tick boxes etc.; • Navigation between different UI pages; • Validation of user-entered data; • Session control; • Activity tracking; • State management. |
Process |
A process is the third level within the Fragile to Agile process classification framework and the third level within the Fragile to Agile process hierarchy framework and is the junction point between both frameworks. As a process is owned by a single level one business capability, it consists of one or more sub-processes designed to produce a specific output for that capability. All sub-processes, activities within those sub-processes, and services it calls must be contained within its owning level one capability (with the exception that it is allowed to call enabler services and/or have tasks related to enabler capabilities.). With respect to the process hierarchy framework, a process must participate in at least one value stream but may participate in many. |
Process Affinity Analysis |
Process affinity analysis considers what processes are required to deliver a service/perform a function in order to determine the appropriate boundary of capabilities on a Business Capability Model. |
Process Category |
A process category is the highest level of classification in the Fragile to Agile Process Classification Framework and is solely used for business process cataloguing. |
Process Classification Framework |
A process classification framework is a taxonomy for business processes and provides a mechanism for cataloguing the processes, thus enabling them to be searched and found. A process can only exist once within a classification framework. By way of contrast, a process can, and ideally should, appear many times within a process hierarchy framework; once for every value stream in which the process participates. Equally, the location of the process in the classification framework says nothing about which value streams the process participates in and deliberately has no concept of a hierarchy of those processes. |
Process Hierarchy Framework |
A process hierarchy framework is a decomposition framework for business processes. Business processes are defined or modelled hierarchically so that they can be understood easily. Processes are decomposed through multiple levels of granularity until they cannot be decomposed any further. Business Process Architecture usually starts with hierarchical process definitions down to a certain level of decomposition (value streams), after which business processes (processes) are represented as flows detailing how work flows among business roles and gets accomplished. Representing processes hierarchically allows one to understand a value stream and its constituent processes, which are too large to be represented as flows e.g., “Originate a Mortgage” is a value stream that contains major processes such as ‘Capture Application Data’, ‘Verify Client Identity’, ‘Assess Credit Risk’ and ‘Open Account’. These major processes can, in turn, be decomposed into sub-processes, which can be represented as flows detailing the tasks performed by organisational roles. |
Process Logic |
Process logic is all of the logic related to driving a piece of work through the business in order to implement a business process. The key phrase is ‘driving a piece of work through the business’: process logic does not include all of the business logic that may need to be invoked, as that logic relates to the actual execution of tasks in the process rather than to the flow of the work itself. Process logic also consists of the assumptions and other principles that underlie a business process design and determine the activities or events, how they are executed, and in what sequence. An example of process logic is all business rules in the class of process flow rules (refer Business Rules description above). |
Process-Owning Capability |
A process-owning capability is the second level in the Fragile to Agile process classification framework and is used for business process cataloguing. The level one business capabilities from an organisation’s Business Capability Model will be the only valid entries for this level of the framework. Therefore, all processes will be assigned a level one capability as their owning capability. |
Program Management |
Program(me) management is the coordinated management of projects and business-as-usual activities to achieve beneficial change. A programme is a unique and transient strategic endeavour undertaken to achieve a beneficial change and incorporating a group of related projects and business-as-usual activities. (From https://www.apm.org.uk/resources/what-is-project-management/what-is-programme-management/ ) A program is a group of related projects managed in a coordinated manner to obtain benefits not available from managing them individually. Program management is the application of knowledge, skills, tools, and techniques to meet program requirements. Organisations with mature program management are far more successful than those without it, according to PMI’s research. (From https://www.pmi.org/learning/featured-topics/program ) |
Project |
A project consists of a concrete and organised effort motivated by a perceived opportunity when facing a problem, a need, a desire … It seeks the realisation of a unique and innovative deliverable, such as a product, a service, a process, or, in some cases, scientific research. Each project has a beginning and an end, and as such may be considered a closed dynamic system. It is bound by the triple constraints of schedule (calendar), cost, and quality, each of which can be determined and measured objectively along the project lifecycle. (From https://en.wikipedia.org/wiki/Project ) |
Project Management |
Project management is the application of processes, methods, skills, knowledge, and experience to achieve specific project objectives according to the project acceptance criteria within agreed parameters. Project management has final deliverables that are constrained to a finite timescale and budget. (From https://www.apm.org.uk/resources/what-is-project-management/ ) Project management is the use of specific knowledge, skills, tools, and techniques to deliver something of value to people. The development of software for an improved business process, the construction of a building, the relief effort after a natural disaster, the expansion of sales into a new geographic market—these are all examples of projects. (From https://www.pmi.org/about/learn-about-pmi/what-is-project-management ) |
Prompt Engineering |
Prompt engineering is the process of structuring an instruction or query to be interpreted by a generative AI model. A prompt is a natural language expression of the task that the AI must perform. |
Prototype |
A prototype is an early sample, model, or release of a product built to test a concept or process or to act as an object to be replicated or learned from. It is a term used in a variety of contexts, including semantics, design, electronics, and software programming. A prototype is designed to test and trial a new design so that system analysts and users can refine it with greater precision. Prototyping serves to provide specifications for a real, working system rather than a theoretical one. |
Proxy Service |
A proxy service acts as both a façade for the underlying service implementation and as a place to attach and enforce policies. It is the first entry point for any service consumer external to the execution context. A proxy service contains no service implementation logic and is used to implement the “decoupled contract” pattern. |
Publish-Subscribe |
Publish–subscribe (Pub-Sub) is a messaging pattern in which the message sender (publisher) sends messages to a topic (or class) rather than directly to specific receivers. Receivers (subscribers) register their interest in a topic in order to receive its messages. Publishers have no knowledge of which subscribers exist, and subscribers have no knowledge of the source of the messages. |
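The minimal sketch below illustrates the decoupling at the heart of the pattern with a simple in-memory broker; the topic name and handlers are illustrative assumptions, and a production broker would add transport, durability and delivery guarantees.

```python
# Minimal in-memory publish-subscribe sketch: publishers and subscribers
# only know about topics, never about each other.
from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # The publisher does not know who (if anyone) receives the message.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
broker.subscribe("orders.created", lambda msg: print("fulfilment saw", msg))
broker.subscribe("orders.created", lambda msg: print("billing saw", msg))
broker.publish("orders.created", {"order_id": 42})
```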
Quality of Experience |
Quality of Experience (QoE, QoX or QX) is a measure of a customer's experience with a service (e.g., web browsing, a phone call, a TV broadcast, a call to a contact centre) and focuses on the entire service experience; it is a more holistic evaluation than user experience (focused on a software interface) and customer support experience (support focused). QoE provides an assessment of human expectations, feelings, perceptions, cognition, and satisfaction with respect to a particular product, service, or application. |
Quality of Service |
Quality of Service (QoS) is the concept that transmission rates, error rates, and other characteristics can be measured, improved, and, to some extent, guaranteed in advance. QoS is of particular concern for the continuous transmission of bandwidth-intensive video and multimedia information. |
RBAC |
Role Based Access Control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an organisation. In this context, access is the ability of an individual user to perform a specific task, such as view, create, or modify a file. Roles are defined according to job competency, authority, and responsibility. When properly implemented, RBAC enables users to conduct a wide range of authorised tasks by dynamically regulating their actions according to flexible functions, relationships, and constraints. This is in contrast to conventional methods of access control, which grant or revoke user access on a rigid, object-by-object basis. In RBAC, roles can be easily created, changed, or discontinued as the needs of the organisation evolve without having to individually update the privileges for every user. |
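A minimal sketch of the idea follows: permissions are attached to roles, users are assigned roles, and an access check consults only role membership. The role, user and permission names are hypothetical.

```python
# Minimal RBAC sketch: users acquire permissions only through role
# membership, so access changes are made by adjusting roles, not users.
ROLE_PERMISSIONS = {
    "clerk":   {"file.view"},
    "editor":  {"file.view", "file.create", "file.modify"},
    "auditor": {"file.view", "audit.report"},
}

USER_ROLES = {
    "alice": {"editor"},
    "bob":   {"clerk", "auditor"},
}

def is_allowed(user: str, permission: str) -> bool:
    """A user may perform an action if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "file.modify"))  # True
print(is_allowed("bob", "file.modify"))    # False
```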
Reactive Programming |
Reactive programming is a software development pattern characterised by an asynchronous, non-blocking, event-driven style. Logic flow is driven by the arrival of new data rather than by the execution thread. |
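The sketch below illustrates the style using asyncio from the standard library: consumer logic runs only when new data arrives on a stream (modelled here by a queue), rather than being driven by a blocking loop; the sensor readings are illustrative.

```python
# Minimal sketch of the reactive style with asyncio: the consumer neither
# polls nor blocks a thread; its logic runs when new data arrives.
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for reading in (21.5, 22.0, 23.7):
        await queue.put(reading)      # new data "arrives"
        await asyncio.sleep(0.1)
    await queue.put(None)             # sentinel: stream complete

async def consumer(queue: asyncio.Queue) -> None:
    while (reading := await queue.get()) is not None:
        # Logic flow is triggered by each arriving value.
        print(f"reacting to new reading: {reading}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```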
Reactive Streams |
Reactive streams are a specification or pattern supporting reactive programming by defining a standard for interoperability between reactive programming libraries. It describes itself as “…an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure.” (see http://www.reactive-streams.org/ ) |
Reactive Systems |
Reactive systems follow a conceptual integration design pattern that defines an asynchronous, loosely coupled, message-driven style providing responsiveness, flexibility, and resilience. See the Reactive Manifesto for more detail and related definitions: https://www.reactivemanifesto.org/ |
Recovery Point Objective |
Recovery Point Objective (RPO) is the maximum acceptable amount of data loss measured in time. It is the age of the files or data in backup storage required to resume normal operations if a computer system or network failure occurs. |
Recovery Time Objective |
A Recovery Time Objective (RTO) is a service level stating the duration of time within which a business process must be restored after a disruption in order to avoid unacceptable consequences associated with a break in business continuity. |
Reference Architecture |
Reference architectures are a distillation of experience (good and bad) that are used to guide and constrain solution designs. They are unlikely to be implemented themselves, but they help solutions leverage good practice and avoid bad practice. Reference architectures: • define a common vocabulary for shared understanding • provide consistent and repeatable answers for solution designs via reusable models and patterns • can be used to validate a solution design, or compare competing designs • encourage compliance with guiding principles, standards and patterns. Reference architectures can be defined at multiple levels of abstraction, from the general to the specific, and can be used to describe different viewpoints or perspectives of a given problem. |
Referential Integrity |
Referential Integrity (RI) is the property of an information system that all data references are valid; that is, wherever a data entity refers to another entity, the referenced entity exists. |
Relational Database |
A relational database is a type of database management system (DBMS) that organises and stores data in a structured manner using tables, columns, and relationships. It is based on the relational model, where data is organised into tables consisting of rows and columns, and the relationships between tables are established using keys. Relational databases provide a flexible and efficient way to store, retrieve, and manipulate structured data, enabling powerful querying, data integrity enforcement, and data consistency. |
RFx |
RFx refers collectively to the procurement sourcing terms such as RFI (Request for Information); RFP (Request for Proposal); RFQ (Request for Quotation); and RFT (Request for Tender). The complexity of the RFx process is determined by the completeness of the requirements, the number of suppliers that have been qualified, expected competition in the supplier base, inherent sourcing risk, and projected savings or cost avoidance opportunities. |
RI |
Referential Integrity (RI) in a relational database is consistency between coupled tables. Referential integrity is usually enforced by the combination of a primary key and a foreign key. For referential integrity to hold, any field in a table that is declared a foreign key can contain only values from a parent table's primary key field. For instance, deleting a record that contains a value referred to by a foreign key in another table would break referential integrity. |
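As an illustration, the sketch below uses SQLite (via the Python standard library) to show a foreign key being enforced; the table and column names are illustrative.

```python
# Minimal referential-integrity sketch: orders.customer_id may only contain
# values that exist in customers.id once foreign key enforcement is enabled.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
                    id INTEGER PRIMARY KEY,
                    customer_id INTEGER NOT NULL REFERENCES customers(id))""")

conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")   # valid reference

try:
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 99)")  # no such customer
except sqlite3.IntegrityError as exc:
    print("referential integrity violation:", exc)

try:
    conn.execute("DELETE FROM customers WHERE id = 1")  # still referenced by order 10
except sqlite3.IntegrityError as exc:
    print("referential integrity violation:", exc)
```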
RPA |
Robotic Process Automation (RPA) is a form of business process automation that allows the definition of a set of instructions for a robot (or ‘bot’) to perform. Bots are capable of mimicking most human-computer interactions to carry out error-free tasks at high volume and speed. (See also: Screen Scraping) |
RPC |
Remote Procedure Call (RPC) is used in distributed computing to invoke a procedure (subroutine) that executes in a different address space (commonly on another computer on a shared network), which is coded as if it were a normal (local) procedure call, without the programmer explicitly coding the details for the remote interaction. |
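A minimal sketch using the standard-library xmlrpc modules is shown below: the client invokes add() as if it were a local function while the call actually executes in the server process; the host, port and function are illustrative.

```python
# Minimal RPC sketch: the client calls add() like a local procedure, but the
# call is marshalled over HTTP/XML and executed by the server.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a: int, b: int) -> int:
    return a + b

server = SimpleXMLRPCServer(("127.0.0.1", 8123), logRequests=False)  # illustrative port
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://127.0.0.1:8123")
print(client.add(2, 3))   # looks local, runs remotely
server.shutdown()
```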
RTGS |
Real Time Gross Settlement (RTGS) is a specialist funds transfer system where the transfer of money or securities takes place from one bank to another on a ‘real time’ and ‘gross’ basis. Settlement in ‘real time’ means the payment transaction is not subjected to any waiting period; transactions are settled as soon as they are processed. ‘Gross settlement’ means the transaction is settled on a one-to-one basis without bunching or netting with any other transaction. Once processed, payments are final and irrevocable. |
SaaS |
Software as a Service (SaaS) is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over a network, typically the Internet. |
SBA |
Service Based Architecture (SBA) is the modularisation of information technology solutions to support a capability rather than application-based solution architecture. It organises software into discrete reusable components or building blocks which, in combination, can be adapted at a level of granularity relevant for the business via a single standard-based interface. As business circumstances change, the building blocks can be moved, or new ones attached, thereby providing the necessary agility for a business operating in a dynamic environment. |
SBU |
A Strategic Business Unit (SBU) is a profit centre focussed on a specific product offering and market segment. |
SBVR |
The Semantics of Business Vocabulary and Business Rules (SBVR) is an adopted standard of the Object Management Group (OMG) intended to be the basis for formal and detailed natural language declarative description of a complex entity, such as a business. SBVR is intended to formalise complex compliance rules, such as operational rules for an organisation, security policy, standard compliance, or regulatory compliance rules. |
SCA |
The Service Component Architecture (SCA) assembly model abstracts the implementation and allows assembly of components, with little implementation details. SCA enables you to represent business logic as reusable service components that can be easily integrated into any SCA-compliant application. The resulting application is known as an SOA composite application. The specification for the SCA standard is maintained by the Organization for the Advancement of Structured Information Standards (OASIS). |
Screen Popping |
Screen popping is a feature of computer telephony integration (CTI) applications that automatically display the relevant caller and account information on a call/contact centre agent's screen during a call. |
Screen Scraping |
Screen scraping is the process of collecting screen display data from one application and translating it so that another application can display it. This is normally done to capture data from a legacy application in order to display it using a more modern user interface (see also: RPA) |
SDLC |
Systems Development Lifecycle or Solution Delivery Lifecycle (SDLC) is composed of a number of clearly defined and distinct work phases which are used to plan for, design, build, test, and deliver information systems and enhancements. |
Segregation of Duties |
Segregation of duties (also known as separation of duties) is the concept of having more than one person required to complete a task. It is an internal control designed to prevent error and fraud by ensuring that at least two individuals are responsible for the separate parts of any task. Although it improves security, breaking tasks down into separate components can negatively impact business efficiency and increase costs, complexity and staffing requirements. For that reason, many organisations apply the concept to only the most vulnerable and mission critical elements of the business. |
SEO |
Search Engine Optimisation (SEO) is the process of improving the visibility of a website or web page in a search engine's natural or un-paid (organic) search results. |
Service |
Services are discrete chunks of software, or components, constructed so that they can be easily linked with other software components. |
Service Container |
Service containers are groupings of related business capabilities that define the scope of a business service offered by those capabilities and how they integrate or relate to each other. Service container boundaries define the granularity of process/service interaction and determine where the organisation gains flexibility and agility. They provide a consistent depth for process modelling and ensure services and components remain within the correct capability boundaries. They are used to define: • Process responsibilities; • Service level performance measurement; • Data stewardship responsibilities; • Integration points and their characteristics and requirements. The intention of service containers is to align process and service design with the desired Operating Model, i.e., which capabilities will and will not be standardised across the business and the desired level of outsourcing, while maximising the reuse of services and thus reducing TCO. Where the service container boundaries reside in relation to the sets of business capabilities is determined by reviewing a number of business and technical factors, including but not limited to: • Outsourcing or multi-sourcing requirements; • Desired process visibility; • Where a single interface to multiple external services is required; • Situations where sufficient data is not available internally; • Data and process affinity analysis. |
Service Contract |
A service contract describes the input required and output provided by a service, as well as the service endpoint. A service contract typically also includes non-functional requirements including availability, security and quality of service. |
Service Encapsulation |
Service encapsulation means to completely cover something especially so that it will not touch anything else. Solution logic can be encapsulated by a service so that it is positioned as an enterprise resource capable of functioning beyond the boundary for which it is initially delivered. |
Service Granularity |
Service granularity specifies the scope of business functionality and the structure of the message payload in a service operation that is provided within a service-based architecture (SBA). |
Service Model |
A service model is a model of the operations and messages provided by a service. It concentrates on the contract of enterprise services and generic services e.g., automated steps in a business process are realised by enterprise services which, in turn, are realised by generic services and components. |
Service Orchestration |
Service orchestration enables services to be strung together in predefined patterns and executed via orchestration scripts. Often the scripts describe the interaction between solutions by identifying and transforming messages, branching logic, and invocation sequences. The software that runs an orchestration script is called an orchestration engine and acts as a centralised authority to coordinate the interaction between services. |
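The sketch below illustrates the idea in miniature: a central script sequences hypothetical service calls, with branching, for the ‘Originate a Mortgage’ example used elsewhere in this glossary; the service functions are stand-ins, not real service invocations.

```python
# Illustrative orchestration sketch: a central script invokes services in a
# predefined sequence with branching. The functions below are hypothetical
# stand-ins for calls an orchestration engine would make to real services.
def capture_application_data(application: dict) -> dict:
    return {**application, "captured": True}

def verify_client_identity(application: dict) -> bool:
    return bool(application.get("client_id"))

def assess_credit_risk(application: dict) -> str:
    return "low" if application.get("income", 0) > 50_000 else "high"

def open_account(application: dict) -> str:
    return f"account opened for {application['client_id']}"

def originate_mortgage(application: dict) -> str:
    """Orchestration script: sequence, branch and coordinate service calls."""
    application = capture_application_data(application)
    if not verify_client_identity(application):
        return "rejected: identity not verified"
    if assess_credit_risk(application) == "high":
        return "rejected: credit risk too high"
    return open_account(application)

print(originate_mortgage({"client_id": "C-001", "income": 72_000}))
```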
Service Register |
A service register stores the metadata of Enterprise, Enabler and Generic Services. This register needs to be accessible from any part of the IT network either locally or through remote access. The exact details of what data about a service is required will be determined during logical design. |
Service Request |
A service request can vary depending on the organisation but is typically a request from a user for information, advice, a standard change, or a request for access to an IT Service e.g., password reset, network user access, desktop move etc. Service requests are usually handled by a Service Desk and do not require a Request for Change to be submitted. |
Service-Oriented Architecture |
A Service-Oriented Architecture (SOA) is an architectural style where functionality is provided using discrete services. In recent years the term has narrowed to refer to a particular style of service provision characterised by the formality of Web Service standards and an Enterprise Service Bus. We now use the term Service Based Architecture (SBA) to identify the more general solution pattern. |
Single Source of Truth |
Single Source of Truth (SSOT) refers to a centralised, authoritative data repository within an organisation that serves as the definitive and trusted source for accurate, up-to-date information. It eliminates inconsistencies and discrepancies by ensuring that all users and systems access and reference the same data source, promoting data integrity, consistency, and alignment across the organisation. |
SIP Trunking |
SIP Trunking is a Voice over Internet Protocol (VoIP) and streaming media service based on the Session Initiation Protocol (SIP). It provides the ability to combine data, voice and video in a single line, eliminating the need for separate physical media for each node. |
Six Sigma |
Six Sigma is a measure of quality that strives for near perfection. It is a disciplined, data-driven approach and methodology for eliminating defects (driving toward six standard deviations between the mean and the nearest specification limit) in any process, from manufacturing to transactional and from product to service. |
Skunk Works |
Skunk works refers to those who work in semi-secret and exercise freedom from an organisation’s standard management constraints and routine procedures. The term is a registered trademark of Lockheed Martin and is the official alias for Lockheed Martin's Advanced Development Programs. |
SLA |
A Service Level Agreement (SLA) provides an agreement between a service provider and the customer as to what constitutes acceptable service in quantifiable and measurable terms. It documents the mutually agreed service objectives, how those objectives will be measured, and the schedule of distribution for the measurements. The intent of an SLA is to ensure the proper understanding and commitments are in place for effective support, measurement and resource planning in order to provide the service(s). |
SLR |
Service Level Requirement (SLR) describes a customer/business unit’s expectations for an IT service. |
SMTP |
Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail transmission across Internet Protocol (IP) networks. |
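A minimal sketch of submitting a message over SMTP with the Python standard library follows; the server host, port and addresses are placeholders, and real credentials would be needed for actual delivery.

```python
# Minimal sketch of sending mail over SMTP with the standard library.
# Host and addresses are placeholders; this will not deliver mail as-is.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "SMTP example"
msg.set_content("Hello via SMTP.")

with smtplib.SMTP("smtp.example.com", 587) as server:  # placeholder host/port
    server.starttls()          # upgrade the connection to TLS
    server.send_message(msg)   # hand the message to the mail server
```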
SNMP |
Simple Network Management Protocol (SNMP) is an Internet standard protocol for managing devices on IP networks. Devices that typically support SNMP include routers, switches, servers, workstations, printers, modem racks etc. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. |
Snowflake |
Snowflake is a cloud-based data platform designed for modern data analytics. It provides a scalable and elastic data warehouse environment that allows organisations to store, manage, and analyse large volumes of data from multiple sources. Snowflake offers features such as instant scaling, automatic performance optimisation, and built-in support for structured and semi-structured data. It enables users to run complex queries, perform advanced analytics, and derive insights from their data in a fast and efficient manner. With its cloud-native architecture, Snowflake simplifies data management and enables organisations to leverage the power of data for driving business intelligence and data-driven decision-making. |
SOAP |
SOAP (Simple Object Access Protocol) allows a program running in one kind of operating system to communicate with a program in the same or different operating system by using HTTP and XML as the mechanisms for information exchange. |
SOAR |
Security Orchestration, Automation, and Response (SOAR) refers to a collection of software solutions and tools that allow organisations to streamline security operations in three key areas: threat and vulnerability management, incident response, and security operations automation. This helps to build automated processes to respond to low-level security events and standardise threat detection and remediation procedures. SOAR tools use security “playbooks” to automate and coordinate workflows (in a digital format) that may include any number of disparate security tools as well as human tasks. |
Soft Dependency |
Dependency classifications are used to characterise dependencies between projects. If there is no financially viable alternative but for the dependent project to wait, it is considered a hard dependency. If the dependent project can go ahead, albeit sub-optimally, it is considered a soft dependency. For soft dependencies, the description will detail the implications, e.g., re-work, if the dependent project went ahead without the project it depends upon having delivered what it needs. |
Solution Container |
A solution container represents an “in-use” service boundary. This is in contrast to service containers, which are designed to represent an ideal state. As a significant percentage of an organisation’s technology solutions are acquired as COTS applications, the actual service boundaries “in use” will not match the “as-designed” boundaries for all applications. As service containers are matched to technology solutions, the “in use” service boundaries typically form a superset of service containers; these are known as solution containers. Solution container boundaries can be determined by mapping application footprints against the “as-designed” service container boundaries, either for COTS application modules separately or for the complete COTS implementation as one. Possible mappings are: • A solution container includes multiple service containers (the most common situation); • A solution container boundary matches a service container (likely for smaller, specialist applications or with fully service-enabled COTS packages); • A service container includes multiple solution containers (although rare, it may happen in specialised capabilities with high data/process affinities that are supported by multiple applications); • A service container splits across multiple solution containers (rare and problematic; indicates a potential technological risk). All services within a solution container will communicate with each other using the methods built into the application software. |
SSL |
Secure Sockets Layer (SSL) is a network protocol that uses strong encryption to secure network communications; it is the now-deprecated predecessor of Transport Layer Security (TLS). |
SSO |
Single Sign-On (SSO) is a session and user authentication service that permits a user to use one set of login credentials to access multiple applications. |
Stateless |
Stateless means that there is no record of previous interactions (from a user, program, device etc.) and each interaction request has to be handled based entirely on information that comes with it. |
STP (Business) |
Straight-Through-Processing (STP) is the ability to execute a business process by computer from beginning to end without manual intervention at any of the stages. The goal of STP is the elimination of inefficiencies in business processes, such as the manual re-keying of data, faxing, paper mail, or unnecessary data batching. In business units with a relatively high volume of orders, the ultimate goal is to achieve near zero manual intervention in the processing of those orders. See also Exception-based STP. |
STP (Taxation) |
In Australia, Single Touch Payroll (STP) is a government facility for employers to report employee payments to the Australian Taxation Office (see https://www.ato.gov.au/businesses-and-organisations/hiring-and-paying-your-workers/single-touch-payroll ). |
Subject Area |
A Subject Area is a data model that captures some view of interest to the business. It is a collection of interrelated entities. Each subject area belongs to one and only one Business Capability in the Business Capability Model. The Common Data Model is the superset of all subject areas. |
Subject Matter Expert |
A Subject Matter Expert (SME) is an individual who possesses specialised knowledge, expertise, and experience in a particular subject or domain. SMEs are recognised as authorities in their field and provide valuable insights, guidance, and support to organisations and projects. They contribute their in-depth understanding and expertise to make informed decisions, provide recommendations, and solve complex problems related to their area of specialisation. SMEs play a critical role in driving innovation, ensuring accuracy, and maintaining high standards within their respective domains. |
Summative Testing |
Summative testing is conducted at the end of the solution development phase to measure or validate the usability of a product; compare it against competitor products or usability metrics; and generate data to support marketing claims about usability. The outcomes of these tests are statistical measures of usability e.g., success rate, average time to complete a task, number of assists etc. |
SWIFT |
Society for Worldwide Interbank Financial Telecommunication (SWIFT) provides a network that enables financial institutions worldwide to send and receive information about financial transactions in a secure, standardised and reliable environment. |
Swim Lane |
A swim lane is a visual element used in process flow diagrams, or flowcharts, that visually distinguishes job sharing and responsibilities for sub processes of a business process. The lanes are arranged either horizontally or vertically and used to visually group processes and decision points. Parallel lines divide the chart into lanes, with one lane for each person, group, or sub process. |
Synchronous (Communications) |
Synchronous communication is a pattern of direct communication where all parties involved in the communication are present at the same time e.g., telephone conversation, company board meeting, chat room event, and instant messaging. If one of the parties is not present the communication cannot occur. |
Synchronous (Integration) |
Synchronous in the integration context refers to a pattern of message or event processing that is temporally coupled with the request; the sender waits for the response, preventing subsequent processing (i.e., it is “blocking”). Synchronous integration increases the level of coupling but may be necessary for real-time (or near real-time) requirements, or for strong data consistency. |
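The sketch below shows the blocking nature of a synchronous call using the Python standard library; the endpoint URL is a placeholder.

```python
# Minimal sketch of a synchronous (blocking) integration call: the caller is
# temporally coupled to the provider and cannot proceed until the response
# (or a timeout) arrives. The URL is a placeholder, not a real endpoint.
import urllib.request

def get_balance_synchronously(account_id: str) -> bytes:
    url = f"https://api.example.com/accounts/{account_id}/balance"  # placeholder
    with urllib.request.urlopen(url, timeout=5) as response:  # blocks the caller
        return response.read()

# Any subsequent processing happens only after the response has been received.
```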
System Event |
A System Event is an event that is generated by a low-level component or system. System events use system-specific data formats rather than the common data model. The enterprise typically has no control over the structure or frequency of system events. |
System of Differentiation |
See Pace Layering |
System of Engagement |
Systems of engagement are decentralised IT components that incorporate technologies such as social media and the cloud to encourage and enable peer interaction, often contrasted with systems of record. (Based on a Geoffrey Moore whitepaper available at https://info.aiim.org/systems-of-engagement-and-the-future-of-enterprise-it ) Moore called systems of engagement the next step in the evolution of IT-enabled consumer experience and said it requires "empowering the middle of the enterprise to communicate and collaborate across business boundaries, global time zones and language and culture barriers, using next-generation IT applications and infrastructure adapted from the consumer space." (From https://searchcio.techtarget.com/definition/systems-of-engagement ) |
System of Innovation |
See Pace Layering |
System of Record |
See Pace Layering |
Task |
A task is the fundamental unit within the Fragile to Agile process hierarchy framework and cannot be decomposed any further. All tasks exist within one or more sub-processes and are performed by a single individual at a finite point in time. Tasks have the following key characteristics: • A resource is responsible for task completion; • A resource undertakes the task; • A task is generally conducted within one organisational unit; • Tasks require a single competence and serve one function; • A task is the lowest level of decomposition; decomposing any further adds no value; • An example is ‘Enter Customer Name onto System’. |
Task Service |
A Task Service is an Enterprise Service that is the target of a business process Task or Call Activity. |
Taxonomy |
A taxonomy is a classification scheme for organising information and data into meaningful groups. |
TCO |
Total Cost of Ownership (TCO) is a calculation to assess both direct and indirect costs and benefits related to the purchase of any IT asset or acquisition over its expected lifecycle with the intention to determine an effective cost of purchase. TCO analysis performs calculations on extended costs (or fully burdened costs) for any purchase e.g., for the purchase of a computer, the fully burdened costs may include the direct capital cost and indirect costs such as installation; service and maintenance support; networking; security; user training; and software licensing. The TCO has to be compared with the total benefits of ownership to determine the viability of the purchase. |
TCP |
TCP (Transmission Control Protocol) is a set of rules (protocol) used with the Internet Protocol (IP) to send data in the form of message units between computers over the Internet. While IP takes care of handling the actual delivery of the data, TCP makes sure that the individual units of data (called packets) that a message is divided into are delivered to the application in the same order that they were sent. |
TCP/IP |
TCP/IP is a two-layer program; the higher layer, Transmission Control Protocol, manages the assembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet to ensure it arrives at the right destination. Each gateway computer on the network checks this address to see where to forward the message. |
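As a small illustration, the sketch below uses the Python standard library socket module to exchange bytes over a local TCP connection; TCP guarantees ordered, reliable delivery of the payload while IP handles addressing underneath. The port number is arbitrary.

```python
# Minimal TCP sketch: a local echo server and client exchange a message.
import socket, threading, time

def echo_server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 9123))       # arbitrary local port
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))   # echo the bytes back, in order

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.create_connection(("127.0.0.1", 9123)) as client:
    client.sendall(b"hello over TCP/IP")
    print(client.recv(1024))
```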
Test Harness |
A test harness or automated testing framework is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behaviour and outputs. |
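A minimal sketch of a test harness using Python's standard unittest framework follows; the unit under test (a discount calculation) is assumed purely for illustration.

```python
# Minimal test-harness sketch: the program unit is exercised under varying
# conditions and its outputs are checked automatically.
import unittest

def apply_discount(price: float, rate: float) -> float:
    """Unit under test (illustrative)."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 0.25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(100.0, 0.0), 100.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```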
TLS |
Transport Layer Security (TLS) is a network protocol that uses strong encryption to secure network communications; it is the successor to SSL. |
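As a brief illustration, the sketch below wraps a plain TCP socket in TLS using the Python standard library, verifying the server certificate against the default trust store.

```python
# Minimal TLS sketch: wrap a TCP socket so all traffic is encrypted and the
# server certificate is verified.
import socket
import ssl

context = ssl.create_default_context()  # secure defaults, system trust store
with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated protocol:", tls_sock.version())  # e.g. 'TLSv1.3'
```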
Tombstone Data |
Tombstone data refers to data that sits on a master file with little or no change during its lifecycle. |
Trojan Horse |
A Trojan horse, or trojan, is a non-self-replicating type of malware (or malicious code) which gains privileged access to an operating system whilst appearing to perform a desirable function but instead installs malicious code allowing unauthorised access. |
Two Factor Authentication |
Two factor authentication provides unambiguous identification of users via a combination of two different components. These components may be something that the user knows (e.g., a username and password), something that the user possesses (e.g., a key or token), or something that is inseparable from the user (e.g., a fingerprint or voice pattern). Proving one's identity with two factor authentication requires that both factors be used and be correct. If one of the components is missing or used incorrectly, a person's identity cannot be established beyond doubt. |
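As an illustration of a common "something the user possesses" factor, the sketch below generates a time-based one-time password (TOTP, RFC 6238) from a shared secret using only the Python standard library; the secret shown is a placeholder.

```python
# Illustrative TOTP generator (RFC 6238): a second factor derived from a
# shared secret and the current time. The base32 secret is a placeholder.
import base64, hashlib, hmac, struct, time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval               # current time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()   # HMAC-SHA1 per RFC 4226
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Combined with a password (something the user knows) to form two factors.
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret
```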
UC |
An Underpinning Contract (UC) is a contract between an IT service provider and a third party; the third party provides goods or services that support the delivery of an IT service to a customer; the Underpinning Contract defines targets and responsibilities that are required to meet agreed service level targets in an SLA. |
UDDI |
Universal Description, Discovery and Integration (UDDI) standards are a platform-independent directory service where businesses can register and search for Web Services. The framework is used for describing services, discovering businesses, and integrating business services using the Internet. |
UI |
A User Interface (UI) is the means by which the user and a computer system interact, in particular the use of input devices and software. |
UI Patterns |
UI (User Interface) Patterns are general and specific techniques intended to solve common design problems. |
UI Storyboard |
User Interface storyboards provide a way to express user scenarios intuitively and can be used to assist with the elicitation of detailed requirements and identify the number and complexity of screens and the navigation path. |
Understanding |
Understanding is the cognitive and analytical process that synthesises new knowledge from previously held knowledge. |
Unit of Work |
A unit of work consists of a number of actions that must all be completed successfully (or rolled back) to leave the business in a consistent state. See also ACID. |
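The sketch below illustrates a unit of work with a SQLite transaction: the debit and credit either both commit or both roll back; the account data and constraint are illustrative.

```python
# Minimal unit-of-work sketch: both updates succeed as one transaction, or
# neither is applied, leaving the balances consistent (see ACID).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 100), ("B", 50)])
conn.commit()

def transfer(source: str, target: str, amount: int) -> None:
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, source))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, target))
    except sqlite3.IntegrityError:
        print("transfer rolled back: insufficient funds")

transfer("A", "B", 30)    # succeeds as a single unit of work
transfer("A", "B", 500)   # violates the CHECK constraint; both updates are undone
print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
```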
UPS |
Uninterruptible Power Supply (UPS) is an electrical apparatus that provides emergency power to a load when the input power source or mains power fails. |
URI |
Uniform Resource Identifiers (URI) are a standard for identifying resources (typically documents) using a short string of numbers, letters, and symbols. They are defined by RFC 3986. |
URL |
Uniform Resource Locator (URL) refers to the subset of URIs that, in addition to identifying a resource, provide a means of locating the resource by describing its primary access mechanism (e.g., its network “location”). |
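As a small illustration, the sketch below parses a URL into its components with the Python standard library; the address is illustrative.

```python
# Minimal sketch: split a URL (a locator form of URI) into its components.
from urllib.parse import urlparse

url = urlparse("https://api.example.com:8443/accounts/42?fields=balance#summary")
print(url.scheme)    # 'https' -> primary access mechanism
print(url.netloc)    # 'api.example.com:8443' -> network "location"
print(url.path)      # '/accounts/42'
print(url.query)     # 'fields=balance'
print(url.fragment)  # 'summary'
```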
Usability |
Usability is the extent to which a product can be used by targeted users to achieve specified goals with effectiveness, efficiency, and satisfaction in a predefined context of use. |
Usability Testing |
Usability testing is a technique used in user centred interaction design to evaluate a product by testing it on users. It focuses on measuring a product/service’s capacity to meet its intended purpose e.g., consumer products, web sites, web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific feature or set of features. |
Use Case |
A use case is a methodology used in system analysis to identify, clarify, and organise system requirements. It is made up of a set of sequences of interactions between systems and users in a particular environment, related to a particular goal. It consists of a group of elements (e.g., classes and interfaces) that can be used together in a way that has an effect larger than the sum of the separate elements combined. |
User Stories |
User stories are used in agile software development to capture a description of a software feature from an end user perspective. The user story describes the type of user, what they want, and why. A user story helps to create a simplified description of a functional requirement. User stories are often scoped to the length of sprints. |
Value Stream |
A value stream is an end-to-end process generating value for its intended stakeholders and can span multiple business capabilities. A value stream is defined by the Business Architecture Guild (and in the APICS Dictionary) as “the processes of creating, producing, and delivering a good or service to the market.” (See https://cdn.ymaws.com/www.businessarchitectureguild.org/resource/resmgr/docs/batobpmalignmentpositionpape.pdf ) A value stream may be internal to an organisation, or it may include external suppliers in addition to internal processes. The difference between a customer value journey and a value stream is that, for a value stream, the organisation is responsible from the clients’ perspective for all the activities (processes, sub-processes and tasks) performed, irrespective of whether or not it outsources some of them. A value stream is the second level within the Fragile to Agile process hierarchy framework and can contain several processes. Therefore, it can span multiple level one business capabilities (noting that an individual process cannot). An example of a value stream is “Originate a Mortgage”, which consists of four business processes: Capture Application Data; Verify Client Identity; Assess Credit Risk; and Open Account. |
View |
A View is described in ISO42010-2022 (https://standards.ieee.org/ieee/42010/6846/ ) as “A representation of a whole system from the perspective of a related set of concerns.” Essentially, it is a diagram, model, table, description, or some other artefact that describes an aspect of a system. |
Viewpoint |
A Viewpoint is described in ISO42010-2022 (https://standards.ieee.org/ieee/42010/6846/ ) as “A specification of the conventions for constructing and using a view.” Essentially, it is a pattern or template from which to develop corresponding views. |
Virtualisation |
Virtualisation is a mechanism by which physical resources such as memory, processing capacity, network connections, etc. are pooled, managed, contained, and allocated to clients as abstracted virtual resources. This has the advantage that such resources can readily be allocated to clients according to need and re-allocated as required without physical rearrangements. |
Visual Hierarchy |
Visual hierarchy refers to the arrangement or presentation of elements in a way that implies importance; it influences the order in which the human eye perceives what it sees, and that order is created by the visual contrast between forms in a field of perception. |
VLAN |
A Virtual Local Area Network (VLAN) enables a single layer 2 network to be partitioned to create multiple distinct broadcast domains, which are mutually isolated so that data packets can only pass between them via one or more network routers. |
VoIP |
Voice over Internet Protocol (VoIP) is a methodology and group of technologies for the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such as the Internet. Other terms commonly associated with VoIP are IP telephony, Internet telephony, voice over broadband (VoBB), broadband telephony, IP communications, and broadband phone service. |
W3C |
The World Wide Web Consortium (W3C) is a standards setting organisation for WWW protocols. |
WADL |
Web Application Description Language (WADL) provides the API equivalent of a WSDL. |
WAN |
A Wide Area Network (WAN) supports communications between Local Area Networks (LAN). |
WCMS |
A Web Content Management System (WCMS) is a content management system specifically for Web content. |
Web Service |
A web service is an interface to a service that applies the WS-* set of standards and protocols to the service to make it interoperable with solutions both inside and outside the company, irrespective of the platform of that solution. |
Wiki |
A wiki is a website or database developed collaboratively by a community of users allowing any user to add and edit content. |
Wireframe |
A wireframe is an image or set of images which displays the functional elements of a website or page, typically used for planning a site's structure and functionality. |
Wisdom |
Wisdom is the ability to contemplate and act productively using knowledge and understanding. |
WS-Security |
WS-Security (Web Services Security) is a proposed IT industry standard that addresses security when data is exchanged as part of a web service. WS-Security specifies enhancements to SOAP messaging aimed at protecting the integrity and confidentiality of a message and authenticating the sender. |
WSDL |
Web Services Description Language (WSDL) is an XML format for describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information. The operations and messages are described abstractly, and then bound to a concrete network protocol and message format to define an endpoint. Related concrete endpoints are combined into abstract endpoints (services). WSDL is extensible to allow description of endpoints and their messages regardless of what message formats or network protocols are used to communicate. (See World Wide Web Consortium (W3C) definition https://www.w3.org/TR/2001/NOTE-wsdl-20010315 ) |
XML |
Extensible Mark-up Language (XML) is a mark-up language that defines a set of rules for encoding documents in a format that is both human and machine readable. |
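As a brief illustration, the sketch below parses a small XML document with the Python standard library; the document content is illustrative.

```python
# Minimal sketch: XML is human-readable markup that a program can also parse.
import xml.etree.ElementTree as ET

doc = """<customer id="42">
             <name>Acme Pty Ltd</name>
             <country>AU</country>
         </customer>"""

root = ET.fromstring(doc)            # machine-readable: parse the markup
print(root.tag, root.attrib["id"])   # customer 42
print(root.findtext("name"))         # Acme Pty Ltd
```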
XSD |
XML Schema Definition (XSD) specifies how to formally describe the elements in an Extensible Markup Language (XML) document. It can be used to verify each piece of item content in a document programmatically. |