Tuesday, 22 December 2015

Refactoring : Mechanical vs Conceptual


Folks in the software industry are familiar with the term refactoring, which means changing the structure of code without changing its behaviour.

However, this can be done at different levels, or from different perspectives.

One is based on detailed code inspection, mechanically concluding what needs to be changed; let's call it mechanical refactoring, or micro refactoring.

For example, you see that a piece of code has been repeated in different places and extract it, or you move a function to another class or module so that the task at hand becomes easier.
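As a tiny sketch of the mechanical kind (the functions and the 10% tax rule are invented for illustration), extracting a duplicated rule might look like this:

```python
# Before: the same tax calculation is repeated in two places.
def invoice_total(items):
    return sum(price * 1.10 for price in items)  # 10% tax, duplicated

def quote_total(items):
    return sum(price * 1.10 for price in items)  # same rule, copied

# After: the duplication is extracted, so the tax rule lives in one place.
TAX_RATE = 1.10

def with_tax(price):
    return price * TAX_RATE

def invoice_total_refactored(items):
    return sum(with_tax(price) for price in items)

def quote_total_refactored(items):
    return sum(with_tax(price) for price in items)
```

The behaviour is unchanged; only the structure moved.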

The other is based on the evolution of domain knowledge. The initial naive, superficial model, built on incomplete, shallow knowledge, starts to evolve as you discover new contours in the domain.

This happens as we learn more about the domain: entities and concepts change shape the more we learn, we decide to add or remove entities, move some functionality to another class or module, new modules come into existence, and others fall out of use.

One is initiated by reviewing code, while the other is initiated by knowledge crunching.
It is also implied that mechanical refactoring can bring you to a point where you feel the urge to jump to another dimension to make sense of domain problems.

Both of these are needed to develop software of good quality.

"Often, though, continuous refactoring prepares the way for something less orderly. Each refinement of code and model gives developers a clearer view. This clarity creates the potential for a breakthrough of insights. A rush of change leads to a model that corresponds on a deeper level to the realities and priorities of the users. Versatility and explanatory power suddenly increase even as complexity evaporates." Eric Evans, DDD 

Wednesday, 23 September 2015

Model Driven Architecture Is Not Really An Architecture Methodology


The perception people have when they hear the term 'SOFTWARE ARCHITECTURE' seems to cover a range of definitions; this is even obvious in job ads.

Almost no two descriptions match: they require different sorts of skills, some more technically demanding, others leadership focused, and so on.

In turn, the architectural design methodologies on offer try to solve different problems.

While researching MDA, and reading a few books and articles, it seemed to me that it is a way of designing a system, and has less to do with meeting the quality attributes alongside the functional requirements of a piece of software.

It is about creating a model of the system in an abstract, platform-ignorant way (a Platform Independent Model, or PIM), possibly using a DSL (Domain Specific Language).

Models have levels (M1, M2, ...); each lower model has more details added to it, which can be more business rules or more technical details (in-memory call, remote call, database type, platform, ...).

Transformers are in charge of creating the lower-level models from the higher-level model, injecting more details and moving from abstract to concrete.
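As a rough sketch of the idea (the model representation and the single transformation rule here are invented for illustration, not any real MDA tool's API), a transformer takes an abstract model element and emits a more concrete one:

```python
# A toy PIM -> PSM transformation: the platform-independent model only
# says "Order is persistent"; the transformer injects platform detail
# (here, a hypothetical relational mapping).
def transform(pim_entity, platform):
    psm = dict(pim_entity)  # copy the abstract element untouched
    if platform == "rdbms" and pim_entity.get("persistent"):
        psm["storage"] = {"table": pim_entity["name"].lower() + "s",
                          "engine": "postgresql"}
    return psm

order_pim = {"name": "Order", "persistent": True}
order_psm = transform(order_pim, "rdbms")
```

Each transformation step narrows the gap between abstract and concrete, while the higher-level model stays platform-free.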

What seems to be missing is meeting the quality attributes.
Of course MDA does not stop you from achieving them, but it does not provide any guidelines for doing so.


CONCLUSION:

Keeping in mind that design and architecture overlap,
MDA seems to be more about design than architecture. It does not give you guidelines about how to achieve quality attributes; it focuses on models, model generators, and so on.

However, this claim might be confusing if we do not agree on a definition of 'SOFTWARE ARCHITECTURE'.

You can find SEI's definition of software architecture and a few more here.



Monday, 14 September 2015

Ubiquitous Language: The Role A Common Language Plays In Development


Developers speak bits; domain experts speak money, policies, rules.

Developers use their own language to communicate technical concepts and stories. They talk about booleans, servers, processes, asynchronous calls, ... .

Domain experts have limited or no understanding of this language; however, they have domain knowledge expressed through their own domain language. They talk about invoices, cargo, shipping, fees, ... .

The problem arises when these two worlds need to communicate: a translation is needed.


To compensate for this deficiency, what usually happens is that a developer from the technical team learns the language of the domain experts (well, as well as he can), which is not an ideal situation, and acts as a translator.

The chain of translation is shown here:

Developer <-> Bilingual developer(s) <->  domain experts.


A very basic example of how even a single word can matter: recently I was working on an IPTV web project, using a third-party tool for reporting video consumption attributes.
The word 'BUFFERING' created confusion, and wasted a week.

For them, buffering meant the player had run out of data and could not play any more; to us it meant downloading the video stream, regardless of the playback status.
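The fix was essentially to give each meaning its own name in the code. A minimal sketch (the event names below are illustrative, not the actual tool's API):

```python
# Two distinct domain events instead of one ambiguous "BUFFERING".
class PlaybackStalled:
    """Player has run out of buffered data and cannot continue."""

class StreamDownloading:
    """Stream segments are being fetched, regardless of playback state."""

def describe(event):
    # Reporting code now reads unambiguously in either team's language.
    if isinstance(event, PlaybackStalled):
        return "stalled: player starved of data"
    if isinstance(event, StreamDownloading):
        return "downloading: fetching stream segments"
    return "unknown"
```

Once the code names matched the agreed language, the week-long confusion could not recur.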

A project faces serious problems when its language is fractured. Domain experts use their jargon while technical team members have their own language tuned for discussing the domain in terms of design.
The terminology of day-to-day discussions is disconnected from the terminology embedded in the code (ultimately the most important product of a software project). And even the same person uses different language in speech and in writing, so that the most incisive expressions of the domain often emerge in a transient form that is never captured in the code or even in writing.
Translation blunts communication and makes knowledge crunching anemic.
Yet none of these dialects can be a common language because none serves all needs.

Use the model as the backbone of a language. Commit the team to exercising that language relentlessly in all communication within the team and in the code. Use the same language in diagrams, writing, and especially speech.
Iron out difficulties by experimenting with alternative expressions, which reflect alternative models. Then refactor the code, renaming classes, methods, and modules to conform to the new model. Resolve confusion over terms in conversation, in just the way we come to agree on the meaning of ordinary words.

Recognise that a change in the UBIQUITOUS LANGUAGE is a change to the model.
Domain experts should object to terms or structures that are awkward or inadequate to
convey domain understanding; developers should watch for ambiguity or inconsistency that will trip up design.

Also, using this language when modelling the domain helps developers express the wisdom and concepts of the domain clearly in the code.

Some excerpts taken from DOMAIN DRIVEN DESIGN, BY ERIC EVANS.

Sunday, 26 July 2015

Microservices Design




The term "micro-service" is getting attention these days, and some big names are using this style of design. I was introduced to it through Netflix's technical publications; Netflix is in an industry similar to the one I work in at the moment.


Currently there is no organisation that defines exactly what this design style is; however, there seems to be some consensus around what a "micro-services architecture" is.


Monolithic vs. Microservice

Comparison is usually a nice way to learn a new concept, so let's compare the micro-services architecture to the monolithic one.


A monolithic application is a single-unit application. Think about a web application: an HTML/JavaScript client side and a web server, which receives client requests and consults its database to generate HTML and return it to the client. We want to focus on the server side.


The server-side application is a monolithic executable, a bulk. Updating the application means updating the executable and/or the database. If the application goes down, none of its features are accessible. Usually it is all written in the same programming language and runs in the same process (communication between modules is in-process). And finally, to scale horizontally you just run more instances of that application (i.e. load balancers and more web servers).


When


Monolithic architecture can be successful; however, in some situations the characteristics of this style of design may not be what you want. It is NOT so compatible with:
  • Horizontal scaling: you may not need to scale all the features of your application. Only some areas may need to be scaled, each by a different factor and at different times.
  • Polyglot development: different areas of your application may be written more easily, and with better performance, using a different language or development paradigm (Node.js, Go, Python, C++, ...) and platform.
  • Independent development cycles: a team has to wait for all the others to finish (design, develop, test, ...) before they can deploy their work.
  • Organising around business capability rather than the organisation's communication pathways: for example, instead of having a UI, middleware, and database team for the whole application, you have a team of each category around each service or business capability.
Characteristics

In micro-services the flavour is a product mentality rather than a project one. In this style there is ongoing work on the software, and the team is more focused on linking business capabilities and providing more features for the customer. This extends to the development team being responsible for deploying and maintaining the running instances.

It seems that micro-services are about smart endpoints and dumb pipes. Consider RESTful HTTP services versus an ESB (Enterprise Service Bus) that can transform, apply business rules, and orchestrate activities using WS-* distributed transaction protocols. In micro-services each service is conceptually decoupled and its features are as cohesive as possible. In more complicated cases some sort of message queuing may be used, e.g. RabbitMQ.

Micro-service communication tends to be asynchronous; this increases performance and respects the distributed mode, allowing for congestion, failures, and so on.

Data modelling and persistence are decentralised, meaning that each service models the world in its own relevant terms and persists data in any format, using the technology that makes the most sense for it: RDBMS, NoSQL, SQL Server, MongoDB, Cassandra, ... .

Automation and monitoring are more advanced than in monolithic applications. The pipeline that starts with building and unit testing and ends in deployment to production needs to be automated. Netflix has the concept of a bakery: they keep template instances, bake a service update into a machine image, and release it, all with (or without) the click of a button.
In a world where many services are running you need to know the statistics of your services, how they perform, how to track messages, and how to monitor the health of the services.

A consequence of using services as components is that applications need to be designed so that they can tolerate the failure of services. Any service call could fail due to unavailability of the supplier; the client has to respond to this as gracefully as possible.
This is where some patterns have emerged: tolerant reader, throttling, ... .
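A minimal sketch of the tolerant-reader idea (the service response shape and the fallback value are hypothetical): the client reads only the fields it needs and degrades gracefully when the supplier misbehaves:

```python
# Tolerant reader: service A consumes service B's response without
# assuming the full schema, and falls back gracefully on failure.
def read_price(response):
    # Accept extra or unknown fields; only 'price' matters to this client.
    try:
        return float(response["price"])
    except (KeyError, TypeError, ValueError):
        return None  # signal the caller to use a fallback

def display_price(response, cached=9.99):
    # Degrade to a cached value instead of failing the whole request.
    price = read_price(response)
    return price if price is not None else cached
```

The same shape applies to timeouts and connection errors: catch, fall back, and keep the rest of the application usable.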

There is a limited number of domains/contexts that each developer can keep in mind and work with; micro-services are compatible with how many contexts a developer can handle and the different skills they need to have. So with smaller, more focused teams you get people with deeper skills and less communication overhead.
  
Service Granularity

There are disagreements about how many lines of code a service should be, but this seems irrelevant. What matters is the level of granularity we are prepared to introduce. This may correlate with lines of code, but line count is beside the point of micro-services.
Please refer to the diagram at the top, observing how some factors change as we move from a monolithic application towards having more and more services.

Modelling the right way is what needs to be cared about. The Domain Driven Design approach (bounded contexts) could be useful in this style of design.


Challenges

In a way, with a micro-services architecture some concerns are pushed to runtime rather than development or build time. This raises issues if you are not prepared for them, especially after you have released to production.


However these can be mitigated by using some techniques and tools.


Versioning: Each micro-service can have its own development and release cycle. Suppose service A depends on service B (version 1). What happens when service B (version 2) is released?

Contracts: To make it even harder, service development might happen in silos, one team not knowing what the others are providing. How can service A know what to expect from service B? We are not talking about the interface alone, but the "needs and provides" contract.


Testing: When testing, service A is going to mock service B. How do developers know what behaviour of service B they need to mock?


Service Boundaries: Another issue is how to define service boundaries. Part of the answer is service granularity, and the bigger part could be domain driven design (bounded contexts).


Transactions: The concept of a transaction, which used to run within a single process, changes as we go distributed. It adds overhead to your services.


Monitoring and tracking: you will need tools for monitoring the health of the services, to see how they are doing and which ones are failing.

A mature micro-service style application needs tools to spin up new servers, automatically or manually, in a matter of minutes if not seconds.

Summary


Micro-services are the trend these days. They stand in contrast to monolithic applications; they are another way of designing applications, with some pros and cons.


The amount of preparation you need to get ready for micro-services is considerable: continuous delivery, automation, testing strategy, deployment, ... all require you to have a top-shape product and supporting infrastructure.


Start simple (with a monolithic application), and break chunks off it bit by bit, moving them to micro-services. You need to confirm that the business model works first, then try to move.


References: Martin Fowler, Netflix articles, Ryan Murray & John Napier.  

Thursday, 2 July 2015

Attribute Driven Design

So far I have talked about what tactics are, and how they fit into architectural design patterns and styles.
I encourage you to read Deriving Architectural Tactics: A Step Toward Methodical Architectural Design.

You have all the bits of knowledge; however, you are going to need a methodical way of putting it all together to design the system.
This is where Attribute Driven Design comes onto the scene.

ATTRIBUTE DRIVEN DESIGN.

I tried using it; in the beginning it was hard to follow, but you get the hang of it eventually. This method is used to create an architecture, down to a few levels of detail, that satisfies the quality attributes of a system. It creates the main structures for the QAs.

Inputs include these architecturally significant requirements:

  • quality attribute requirements 
  • design constraints
  • functional requirements

Outputs include
  • first several levels of module decomposition
  • various other views of the system as appropriate
  • set of elements with assigned functionalities and the interactions among the elements 
Before looking at the steps, let's clarify these inputs:
Functional requirements define what a system should do to meet stakeholder needs. For example :
 - Users should be able to view their account activity.
 - Users should be able to buy and sell goods.

 Design constraints are decisions about a system's design that must be incorporated into the final design of a system. Examples:

 - Should use CouchDB as storage.
 - Should use HTTP as the communication protocol.
 - System shall run on both Unix and Windows.

Quality attribute requirements are the requirements that indicate the degree to which a system must exhibit various properties. For example:
 - The system must be buildable within six months.
 - The system shall process sensor input within 1 second.
 - The system shall allow unit tests to be performed within 3 hours with 85% path coverage.

And don't forget that these can be implied by one another. For example:
 - "Given that Joe is the only resource available to manage persistent storage, and he only knows Oracle" means the system should use Oracle.
 - "Given that market demand will increase dramatically in the next six months" means the system must be buildable within six months.


Now Let's have a look at steps:

STEP 1:

In essence, you make sure that the system’s stakeholders have prioritised the requirements according to business and mission goals. You should also confirm that there is sufficient information about the quality attribute requirements to proceed.

STEP 2:

In this second step, you choose which element of the system will be the design focus in subsequent steps. You can arrive at this step in one of two ways: 

1. You reach Step 2 for the first time as part of a “greenfield” development. The only element you can decompose is the system itself. By default, all requirements are assigned to that system. 

2. You are refining a partially designed system and have visited Step 2 before. In this case, the system has been partitioned into two or more elements, and requirements have been assigned to those elements. You must choose one of these elements as the focus of subsequent steps. 
In the second case, you might choose the element based on risk and difficulty, business criteria, organisational criteria, ... .

STEP 3:

At this point, we have chosen an element of the system to decompose, and we have the stakeholders' prioritised list of requirements that affect that element.
Stakeholders have put High, Medium, or Low next to each requirement, indicating how important it is to them. Then the architect also puts High, Medium, or Low next to each requirement, indicating the potential impact of the requirement on the architecture.
You then have a pair of values for each requirement:

(H,H),(H,M),(M,H),(M,M),(L,H),(L,M),...

Just notice that further into the design, after some analysis, you may find that your assumptions need to change; so change them, and choose the drivers again.
Five or six candidates are enough to go forward.
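The (importance, impact) pairs give a natural ordering. A small sketch of ranking candidates this way (the requirements listed are made up for illustration):

```python
# Rank candidate architectural drivers by (stakeholder importance,
# architectural impact): High before Medium before Low.
RANK = {"H": 0, "M": 1, "L": 2}

requirements = [
    ("support 10k concurrent users", ("H", "H")),
    ("admin UI theming",             ("L", "M")),
    ("sub-second sensor processing", ("H", "M")),
    ("CSV export",                   ("M", "L")),
]

def sort_drivers(reqs):
    return sorted(reqs, key=lambda r: (RANK[r[1][0]], RANK[r[1][1]]))

# Take the top few as the candidate architectural drivers.
candidates = [name for name, _ in sort_drivers(requirements)[:5]]
```

Anything rated (H,H) or (H,M) rises to the top of the candidate list; the low-ranked pairs can safely wait.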

STEP 4:

At this point, we have chosen an element of the system to decompose and have identified the candidate architectural drivers. Now we need to choose our design concept, which means choosing the major types of elements and the types of relationships among them.
Design concepts and QA requirements help you achieve this.

You can follow a methodical set of steps to derive this:

Identify the design concerns that are associated with the candidate architectural drivers. For example, for a quality attribute requirement regarding availability, the major design concerns might be fault prevention, fault detection, and fault recovery.

For each design concern, create a list of alternative patterns that address the concern.
Identify each pattern’s discriminating parameters to help you choose among the patterns and tactics in the list. For example, in any restart pattern (e.g., warm restart, cold restart), the amount of time a restart takes is a discriminating parameter.
Select patterns from the list that you feel are most appropriate for satisfying the candidate architectural drivers. Record the rationale for your selections.
You can create a matrix of patterns pros, and cons, versus each architectural driver.
Choose which pattern, combination of patterns, or new patterns you want to use, and record your rationale.

Review, evaluate, and refine.  

At this point:
 - You have decided on an overall design concept with the major component types and the relationships among them.
 - You have assigned some functionality to each element.
 - You have decided on the types of relationships: remote call, local call, sync, async, ... .
 - You have captured the requirements of the elements and the data models.

STEP 5:

At this point, you instantiate the various types of software elements you chose in the previous step. Instantiated elements are assigned responsibilities according to their types; for example, in a Ping-Echo pattern, a ping-type element has ping responsibilities and an echo-type element has echo responsibilities. Responsibilities for instantiated elements are also derived from the functional requirements associated with candidate architectural drivers and the functional requirements associated with the parent element. At the end of Step 5, every functional requirement associated with the parent element must be represented by a sequence of responsibilities within the child elements.

STEP 6:

You define the services and properties required and provided by the software elements in our design. In ADD, these services and properties are referred to as the element’s interface. Note that an interface is not simply a list of operation signatures. Interfaces describe the PROVIDES and REQUIRES assumptions that software elements make about one another. An interface might include any of the following: 
 • syntax of operations (e.g., signature) 
 • semantics of operations (e.g., description, pre- and postconditions, restrictions) 
 • information exchanged (e.g., events signaled, global data) 
 • quality attribute requirements of individual elements or operations 
 • error handling.

STEP 7:

You verify that the element decomposition thus far meets functional requirements, quality attribute requirements, and design constraints. You also prepare child elements for further decomposition. 

NEXT:

Once you have completed Steps 1–7, you have a decomposition of the parent element into child elements. Each child element is a collection of responsibilities, each having an interface description, functional requirements, quality attribute requirements, and design constraints. You can now return to the decomposition process in Step 2 where you select the next element to decompose. 

I have used SEI's material to write this post, if you are interested in more details on this method, their web-site has a couple of good articles on this.

Monday, 25 May 2015

Tactics and Architectural Patterns


So far we have learned about patterns and tactics.

Patterns are solutions that resolve multiple forces, whereas tactics focus on specific quality attributes. To apply both well, architects need to understand how architectural tactics and patterns relate and how to use them together effectively.

Patterns are built from a collection of tactics, realising some quality attributes and perhaps affecting others.

Even different implementations of a pattern may use different sets of tactics.

Let's consider the pipe-and-filter architectural pattern: from the modifiability tactics point of view it contains the following tactics:

  • Increase Cohesion / Maintain Semantic Coherence
  • Reduce Coupling   / Use Encapsulation
  • Reduce Coupling   / Use an Intermediary
  • Defer Binding Time/Use Start-Up Time Binding
Now, using the modifiability tactics catalogue, can you find the tactics that exist in the Layers pattern?
Answer: 

  • Increase Cohesion / Maintain Semantic Coherence
  • Increase Cohesion / Abstract Common Services
  • Reduce Coupling   / Use Encapsulation
  • Reduce Coupling   / Reduce Communication Paths
  • Reduce Coupling   / Use an Intermediary
  • Reduce Coupling   / Raise the Abstraction Level

Sunday, 17 May 2015

Achieving Quality Attributes - Security

Last week, my post box was compromised while my new debit card was in it.
The lucky guy(s) were using my card when I received a call from the bank asking whether I had used my card that day. Eventually the card was cancelled.
This inspired me to pick security for this post, and to use the incident as an example of the tactics involved.

Security Tactics


Tactics for achieving security can be divided into those concerned with resisting attacks, those concerned with detecting attacks, and those concerned with recovering from attacks. Using a familiar analogy, putting a lock on your door is a form of resisting an attack, having a motion sensor inside of your house is a form of detecting an attack, and having insurance is a form of recovering from an attack.



RESISTING ATTACKS

Authenticate users. Authentication is ensuring that a user or remote computer is actually who it purports to be. Passwords and digital certificates for example.

Authorise users. Authorisation is ensuring that an authenticated user has the rights to access and modify either data or services. This is usually managed by providing some access control patterns within a system.

Maintain data confidentiality. Data should be protected from unauthorised access. Confidentiality is usually achieved by applying some form of encryption to data and to communication links. SSL, public/private keys.

Maintain integrity. Data should be delivered as intended. It can have redundant information encoded in it, such as checksums or hash results, which can be encrypted either along with or independently from the original data.
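A small sketch of the integrity tactic using Python's standard library (a keyed hash via HMAC, so the redundant checksum itself cannot be forged without the key; the key and message are invented):

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # hypothetical key shared by both ends

def sign(message: bytes) -> str:
    # Redundant information (a keyed hash) travels alongside the data.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign(message), tag)

msg = b"pay $100 to account 42"
tag = sign(msg)
```

Any tampering with the message in transit makes verification fail, so the receiver knows the data was not delivered as intended.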

Limit exposure. Attacks typically depend on exploiting a single weakness to attack all data and services on a host. The architect can design the allocation of services to hosts so that limited services are available on each host.

Limit access. Firewalls restrict access based on message source or destination port. Messages from unknown sources may be a form of an attack. It is not always possible to limit access to known sources. 

DETECTING ATTACKS

The detection of an attack is usually through an intrusion detection system. Such systems work by comparing network traffic patterns to a database. In the case of misuse detection, the traffic pattern is compared to historic patterns of known attacks. In the case of anomaly detection, the traffic pattern is compared to a historical baseline of itself. As an example I can refer you to my stolen-card story mentioned above: since the pattern and amount of usage differed from the rest, the bank could detect it.
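In the spirit of the bank's check, a toy sketch of anomaly detection against a historical baseline (the spending data and the threshold are made up):

```python
import statistics

# Historical daily spend for this card: the baseline of itself.
history = [35.0, 42.0, 28.0, 50.0, 39.0, 45.0, 31.0]

def is_anomalous(amount, baseline, k=3.0):
    # Flag a transaction more than k standard deviations above the mean.
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return amount > mean + k * stdev

suspicious = is_anomalous(900.0, history)  # the thieves' spending spree
normal = is_anomalous(40.0, history)
```

Real systems use far richer features (merchant, location, time of day), but the principle is the same: compare current behaviour to the subject's own history.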

RECOVERING FROM ATTACKS

Tactics involved in recovering from an attack can be divided into those concerned with restoring state and those concerned with attacker identification.

The tactics used in restoring the system or data to a correct state overlap with those used for availability since they are both concerned with recovering a consistent state from an inconsistent state. One difference is that special attention is paid to maintaining redundant copies of system administrative data such as passwords, access control lists, domain name services, and user profile data.
The tactic for identifying an attacker is to maintain an audit trail. An audit trail is a copy of each transaction applied to the data in the system together with identifying information. Audit information can be used to trace the actions of an attacker, support nonrepudiation (it provides evidence that a particular request was made), and support system recovery. Audit trails are often attack targets themselves and therefore should be maintained in a trusted fashion.
In my case of the stolen card, the bank cancelled the card.


If you are working on a project with some security measures implemented into it, see if you can identify tactics implemented. 

Monday, 11 May 2015

Achieving Quality Attributes - Modifiability


So far I have mostly written about quality attributes, documenting them using scenarios  and architect's goal in achieving them. An architect needs tools to tackle this. 

How does an architect do it?


The answer is that there are different levels of action you can take to achieve the desired qualities in your system. I will start from the most granular level, where you use the so-called tactics.

A tactic is a fundamental design decision that influences the control of a quality attribute response (do you remember the QA scenarios we talked about earlier?).

Each tactic can be refined into further tactics; for example, a tactic to achieve availability is redundancy, which can be refined into redundancy of data and redundancy of computation.
However, to implement this tactic we may need synchronisation (to keep redundant copies in sync with the original). This means a collection of tactics may be used together; in some cases we call such collections patterns.

In the remainder of this post I will examine tactics for achieving modifiability, as I find it easily understood by developers. More quality tactics will follow in the next posts.

And please note that I won't cover all quality attribute tactics, just enough to give you a sound understanding; for full details I refer you to this book.

Modifiability Tactics

The goal of the following tactics is to control the time and cost to implement, test, and deploy changes.

There are three main categories for these tactics, which are explained in detail in each section:

LOCALIZE MODIFICATIONS 

Generally speaking, the fewer modules a change request affects, the lower the cost. The goal of tactics in this set is to assign responsibilities to modules during design such that anticipated changes will be limited in scope:
  • Maintain semantic coherence: Semantic coherence refers to the relationships among responsibilities in a module. The goal is to ensure that all of these responsibilities work together without excessive reliance on other modules. Achievement of this goal comes from choosing responsibilities that have semantic coherence. Coupling and cohesion metrics are an attempt to measure semantic coherence, but they are missing the context of a change. Instead, semantic coherence should be measured against a set of anticipated changes. 
  • Anticipate expected changes: Considering the set of envisioned changes provides a way to evaluate a particular assignment of responsibilities. The basic question is "For each change, does the proposed decomposition limit the set of modules that need to be modified to accomplish it?" An associated question is "Do fundamentally different changes affect the same modules?" How is this different from semantic coherence? Assigning responsibilities based on semantic coherence assumes that expected changes will be semantically coherent. The tactic of anticipating expected changes does not concern itself with the coherence of a module's responsibilities but rather with minimising the effects of the changes. In reality this tactic is difficult to use by itself since it is not possible to anticipate all changes. For that reason, it is usually used in conjunction with semantic coherence.
  • Generalise the module: Making a module more general allows it to compute a broader range of functions based on input. The input can be thought of as defining a language for the module, which can be as simple as making constants input parameters or as complicated as implementing the module as an interpreter and making the input parameters be a program in the interpreter's language. The more general a module, the more likely that requested changes can be made by adjusting the input language rather than by modifying the module.
  • Limit possible options: This one concerns product lines, which are beyond this blog's scope.
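The "generalise the module" tactic, in its simplest form of turning constants into input parameters, might look like this (a made-up pricing example):

```python
# Specific: the discount rule is hard-coded; changing it means
# modifying the module.
def discounted_price(price):
    return price * 0.9  # 10% discount baked in

# General: the rate is input, so anticipated changes become
# configuration rather than code edits.
def discounted_price_general(price, rate=0.9):
    return price * rate
```

Taken further, the "input language" can grow into a full interpreter, but even parameterising a constant moves a class of changes out of the module.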

PREVENT RIPPLE EFFECTS

A ripple effect from a modification is the necessity of making changes to modules not directly affected by it. For instance, if module A is changed to accomplish a particular modification, then module B is changed only because of the change to module A. B has to be modified because it depends, in some sense, on A. 

There are different dependency types: syntactic, semantic, location, existence of, behaviour of,.. .
  • Hide information:Information hiding is the decomposition of the responsibilities for an entity (a system or some decomposition of a system) into smaller pieces and choosing which information to make private and which to make public. The public responsibilities are available through specified interfaces.
  • Maintain existing interfaces: If B depends on the name and signature of an interface of A, maintaining this interface and its syntax allows B to remain unchanged. Of course, this tactic will not necessarily work if B has a semantic dependency on A, since changes to the meaning of data and services are difficult to mask. Also, it is difficult to mask dependencies on quality of data or quality of service, resource usage, or resource ownership. Interface stability can also be achieved by separating the interface from the implementation.
  • Restrict communication paths: Restrict the modules with which a given module shares data. That is, reduce the number of modules that consume data produced by the given module and the number of modules that produce data consumed by it. This will reduce the ripple effect since data production/consumption introduces dependencies that cause ripples.
  • Use an intermediary: If B has any type of dependency on A other than semantic, it is possible to insert an intermediary between B and A that manages activities associated with the dependency. 
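As a small sketch of the "use an intermediary" tactic, the Python example below (with hypothetical class names) puts a broker between B and A, so a change in how A is located or constructed ripples to the broker rather than to B:

```python
class ServiceA:
    def fetch(self) -> str:
        return "data from A"

class Broker:
    """Intermediary that manages B's dependency on A."""
    def __init__(self) -> None:
        self._target = ServiceA()   # only the broker knows how to reach A

    def request(self) -> str:
        return self._target.fetch()

class ModuleB:
    def __init__(self, broker: Broker) -> None:
        self._broker = broker       # B never references A directly

    def run(self) -> str:
        return self._broker.request()

print(ModuleB(Broker()).run())  # -> "data from A"
```

If A is replaced or relocated, only `Broker.__init__` changes; `ModuleB` is untouched.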
DEFER BINDING TIME

The tactics discussed so far minimise the number of modules that need changing to implement a modification. But what about reducing the time to deploy changes, and allowing non-developers to make them?

Deferring binding time supports both of those scenarios at the cost of requiring additional infrastructure to support the late binding. We discuss tactics that affect deployment time.
Many tactics are intended to have impact at load-time or runtime, such as the following.
  • Runtime registration supports plug-and-play operation at the cost of additional overhead to manage the registration. Publish/subscribe registration, for example, can be implemented at either runtime or load time.
  • Configuration files are intended to set parameters at startup.
  • Polymorphism allows late binding of method calls.
  • Component replacement allows load time binding.
  • Adherence to defined protocols allows runtime binding of independent processes.
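One of the tactics above, runtime registration, can be sketched in a few lines of Python. Here handlers bind themselves to a registry as the module runs, so the caller selects behaviour by name at runtime (for example, from a configuration file) instead of through hard-coded calls; the exporter names are my own illustration:

```python
registry = {}

def register(name: str):
    """Decorator that binds a handler into the registry at runtime."""
    def decorator(func):
        registry[name] = func
        return func
    return decorator

@register("csv")
def export_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

@register("count")
def export_count(rows):
    return str(len(rows))

# The caller picks an exporter by name at runtime, deferring the binding.
rows = [(1, 2), (3, 4)]
print(registry["csv"](rows))
print(registry["count"](rows))
```

Adding a new exporter is then a matter of registering another handler; no existing call site changes, at the cost of the extra registry infrastructure mentioned above.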

And finally the tactics distilled:


Thursday, 7 May 2015

Patterns, Design Patterns, Architectural Patterns


Before writing about how an architect can achieve a system's quality attributes, I thought it would be a good idea to dedicate a post to what a pattern is, and what the difference between design and architectural patterns is, for the sake of speaking the same ubiquitous language.

I understand there are various definitions for these terms, but let's agree on these for the scope of this blog.

An architect of a software system designs its architectural structures to solve a variety of design problems. These structures are based on one or more patterns.

So what is a pattern?

A pattern describes a particular recurring design problem that arises in specific design contexts, and represents a well-proven solution for the problem.
The solution is specified in terms of describing the roles of its constituent participants, their responsibilities and relationships, and how they collaborate.

Let's have a look at an example, and have a better understanding of problem, context and solution.

An Example: LAYER PATTERN

Context: Regardless of the interactions and coupling between different elements of a software system, there is a need to develop and evolve them independently. Without a clear and reasoned separation of concerns, element interactions cannot be supported and elements cannot be independently developed.

Problem: Finding a design that partitions the application into meaningful, tangible elements that can be developed and deployed independently while preserving the architectural vision and addressing concerns such as performance, scalability, maintainability, and comprehensibility.

Solution: Define one or more layers for the software with each layer having a distinct and specific responsibility. Layers define a partitioning of software functionality according to a (sub)system-wide property so that each group of functionality is clearly encapsulated and can evolve independently. Functionality can be partitioned along various dimensions including abstraction, granularity, hardware distance, and rate of change. Layers are associated with each other via a one-way “allowed-to-use” relationship.
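The one-way "allowed-to-use" relationship can be sketched in code. In the minimal Python example below (the three layers and their contents are purely illustrative), each layer holds a reference only to the layer directly below it, so dependencies flow in one direction:

```python
class DataLayer:                      # lowest layer: storage concerns
    def load(self, key: str) -> str:
        return {"42": "alice"}.get(key, "unknown")

class DomainLayer:                    # middle layer: business rules
    def __init__(self, data: DataLayer) -> None:
        self._data = data             # allowed to use only the layer below

    def greeting(self, user_id: str) -> str:
        return f"Hello, {self._data.load(user_id)}"

class PresentationLayer:              # top layer: user-facing formatting
    def __init__(self, domain: DomainLayer) -> None:
        self._domain = domain

    def render(self, user_id: str) -> str:
        return self._domain.greeting(user_id).upper()

app = PresentationLayer(DomainLayer(DataLayer()))
print(app.render("42"))  # -> "HELLO, ALICE"
```

Because `DataLayer` knows nothing about the layers above it, the storage mechanism can evolve independently of the presentation.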

The Scope and Abstraction Level 

Here is where many people get confused: what is a design pattern, what is an architectural pattern, and how are they different?

To be honest, I don't see much value in answering these questions: after all, you just pick the pattern you need and use it, without caring which category it fits in.
However, because patterns cover various ranges of scale and are applied at various levels of abstraction, it is sometimes useful to broadly classify them as

  • Architectural patterns - express a fundamental structural organisation schema for software systems. An architectural pattern provides a set of predefined [major architectural elements], specifies their responsibilities, and includes rules and guidelines for organising the relationships between them.
  • Design patterns - provide a scheme for refining the [major architectural elements] of a software system, or the relationships between them. A design pattern describes a commonly recurring structure of communicating [elements] that solves a general design problem within a particular context.
  • Idioms – are patterns specific to a programming language. An idiom describes how to implement particular aspects of components or the relationships between them using the features of the given language.

Note there is significant overlap and ambiguity in these definitions. Architectural design may incorporate any/all of the above types of patterns. 

If you are interested in patterns, there is a very good book I recommend: Pattern-Oriented Software Architecture, which comes in five volumes.



Tuesday, 5 May 2015

Quality Attribute Workshop

So far you may have noticed the importance of quality attributes, and the fact that they are key factors in the success of the designed software.

But how can you capture relevant quality attributes? What do they mean to stakeholders?
How can you tell their priorities?

The Software Engineering Institute (SEI) suggests a method called the Quality Attribute Workshop (QAW),
which facilitates the process of capturing quality attributes and prioritising them.
It is recommended that the method be used when there is no software yet (early stages); however, in my experience it can be done even after the software is developed and released, as it helps communication among stakeholders and points them in the right direction.

Depending on the size, number, and nature of the stakeholders, you may need more than one workshop to capture the required data. Usually it takes 1-2 days. Here are the steps to follow:

QAW Steps
  1. QAW Presentation and Introduction
  2. Business/Mission Presentation
  3. Architectural Plan Presentation
  4. Identification of Architectural Drivers
  5. Scenario Brainstorming
  6. Scenario Consolidation
  7. Scenario Prioritisation
  8. Scenario Refinement -> (1) Iterate as necessary with broader stakeholder community
    For details of how each step works, refer to the SEI's article on the method.

The benefits of a QAW exceed the time and energy you put into it.
Here are some:
  • Increased stakeholder communication (hey developer, remember you are in the human world, not just coding)
  • Clarified quality attribute requirements (using scenarios)
  • An informed basis for architectural decisions
The outcome of a QAW is a set of prioritised quality attribute scenarios that the architect can use to create prototypes and refine requirements in more detail.

I highly recommend delving more into this method and mastering it.

Also, if you are running a QAW session, you may need a team to capture requirements, help stakeholders write scenarios, stop lengthy discussions between stakeholders, identify key quality attributes, and so on.

At the time I learned about this, I could not run a full-fledged QAW because of my position; however, I started by talking to my manager, trying to find out what quality attributes he was concerned about, and helped him express them in a scenario. Then I increased the circle of people I talked to.

As an exercise, try talking to somebody who has a different role from yours, and see what quality attributes they are concerned about.
       

Wednesday, 29 April 2015

Understanding Quality Attributes

Quality attributes are properties of a system by which stakeholders evaluate its quality. In simple words, they are about how well a system performs its functions.

Examples: performance, security, availability, modifiability, usability, and so on.
Part of stakeholders' concerns can be addressed by, and translated into, quality attributes.

For example, if a stakeholder states that they are looking to increase the market share of their product, it may be translated into modifiability and usability.

Stakeholders of the system are the main source for gathering quality attribute requirements. 

The degree to which a software system meets its quality attribute requirements depends on its architecture. This is where the architect makes design decisions to satisfy quality attributes. Usually, satisfying one quality attribute affects others; almost all quality attributes stand against performance. As a result, an architect has to make tradeoffs in the design to achieve the optimal balance of quality attributes.

Considering the importance of quality attributes to an architect, we need a tool to gather them, express them, and reach a consensus among stakeholders. There is a specific method for gathering, prioritising, and expressing quality attributes, called the Quality Attribute Workshop, which is of great help here.

It is worth mentioning that saying "this system should be modifiable, secure, or highly available" is not of much value; we need to say which area of the system should be modifiable, and against what changes. In other words, quality attribute requirements should be measurable, and hence testable.

Also, people with different professional backgrounds understand different concepts under the same quality attribute name.
A possible (and proven) solution to this dilemma is using quality attribute scenarios.

Quality Attribute Scenarios

A quality attribute scenario is a short description of how a system is required to respond to some stimulus. A scenario has six parts:

  1. source - an entity that generates the stimulus
  2. stimulus - a condition that affects the system
  3. artifact - the part of the system that was stimulated by the stimulus
  4. environment - the condition under which the stimulus occurred
  5. response - the activity that results because of the stimulus
  6. response measure - the measure by which the system's response will be evaluated 
Let's have a look at an example for modifiability:

A change request for updating a graph arrives; the change should be implemented and deployed within 4 hours.

Source: A customer
Stimulus: Change request
Artifact: Code / UI
Environment: Normal operation
Response: Change the code and deploy
Response measure: Within 4 hours
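The six parts above can also be captured as a simple data structure. The Python sketch below is my own representation of a scenario (not part of the SEI material), populated with the modifiability example:

```python
from dataclasses import dataclass

@dataclass
class QAScenario:
    source: str            # entity that generates the stimulus
    stimulus: str          # condition that affects the system
    artifact: str          # part of the system that is stimulated
    environment: str       # condition under which the stimulus occurs
    response: str          # activity resulting from the stimulus
    response_measure: str  # how the response will be evaluated

modifiability = QAScenario(
    source="A customer",
    stimulus="Change request to update a graph",
    artifact="Code / UI",
    environment="Normal operation",
    response="Change the code and deploy",
    response_measure="Within 4 hours",
)
print(modifiability.response_measure)
```

Forcing every scenario through the same six fields is a useful discipline: a missing field usually means the requirement is not yet testable.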

And here is an example for security:

An employee attempts to change the pay rate from a remote location during normal system operation; the system keeps an audit trail, correct data is restored within a day, and the source of the tampering is identified.

Can you try to identify the parts of the scenario above?

Source: An employee
Stimulus: An attempt to change the pay rate
Artifact: System data
Environment: Normal operation
Response: Keep an audit trail
Response measure: Correct data restored and tampering source identified within a day

General scenarios vs Concrete scenarios
So far all the scenarios we have seen are concrete, meaning they point to a very specific, known story.
However, depending on the domain, the organisation, or even your own experience, you will notice that some terms and actions start to recur in scenario parts.
You can take advantage of this fact by building a table of those recurring, possible values for each part and quality attribute. This will help you and the stakeholders produce scenarios more quickly and fluently.
Testability General Scenario Generation (portion of scenario → possible values):

  • Source: Unit developer; increment integrator; system verifier; client acceptance tester; system user
  • Stimulus: Analysis, architecture, design, class, subsystem integration completed; system delivered
  • Artifact: Piece of design, piece of code, complete application
  • Environment: At design time, at development time, at compile time, at deployment time
  • Response: Provides access to state values; provides computed values; prepares test environment
  • Response Measure: Percent executable statements executed; probability of failure if fault exists; time to perform tests; length of longest dependency chain in a test; length of time to prepare test environment
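Once you have such a table, you can even treat it as data and mechanically combine values to draft candidate scenario skeletons for stakeholders to refine. The small Python helper below (the helper and the trimmed value lists are my own illustration, not part of the SEI method) enumerates the combinations:

```python
import itertools

# A trimmed-down general-scenario table: part -> possible values.
testability_table = {
    "source": ["Unit developer", "System verifier"],
    "stimulus": ["Class integration completed", "System delivered"],
    "response": ["Provides access to state values", "Prepares test environment"],
}

def draft_scenarios(table: dict) -> list:
    """Return one dict per combination of possible values."""
    parts = list(table)
    combos = itertools.product(*(table[p] for p in parts))
    return [dict(zip(parts, combo)) for combo in combos]

drafts = draft_scenarios(testability_table)
print(len(drafts))          # 2 * 2 * 2 = 8 candidate skeletons
print(drafts[0]["source"])  # -> "Unit developer"
```

Most generated combinations will be discarded, of course; the point is only to give stakeholders concrete starting material.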

I think this is a good time for an exercise: try to find out which quality attributes are important for the software you are developing or working with, and create a few scenarios.


Other Quality attributes

There are quality attributes of a different nature that are also important to an architect.

Business quality attributes: these are concerned with cost, schedule, and market, e.g. time to market, cost and benefit, expected lifetime, and rollout schedule.
Architectural Quality attributes