Thursday 25 February 2016

Agile is not a silver bullet but another tool in the toolbox


I thought of writing this since it seems many software experts are obsessed with agile, and specifically Scrum. Anywhere they go, they want to implement Scrum, XP, and so on.
Setting aside the initial hype and the over-enthusiasts, it's time to have a look at the statistical data that has been collected.

As in many other cases in software, we need to choose the best tool given the constraints and capacities at hand.
Agile methodologies have been out there for a while now, and data about their success or failure is starting to emerge.

Agile was first formulated by a number of software development experts who wrote the "Agile Manifesto", built around twelve principles. Conceptually, agile is about self-directed individuals, communication, customer satisfaction, and minimal documentation.

Since then a number of methodologies have come to exist.

In his book Software Engineering Best Practices, Capers Jones compares different software development methodologies with respect to different project sizes and metrics.

First we need to establish some definitions, to be able to intuitively understand the final results:

- Small project: fewer than 10 team members, and a couple of months to complete.*

- Medium project: a team of fewer than 10 members over around a year, or a couple of teams over several months.*

- Large project: multiple teams over multiple years.*

*Capers Jones uses function points to measure project size.
According to Jones, agile methodologies rank first for small projects.
For medium-sized projects agile comes second; however, for large projects agile is not in the top four.

The following table indicates the top-rated (most successful) methodology for each development metric, for projects of 1,000 function points:

1. Development schedules: Extreme Programming (XP)
2. Development staffing: Agile/Scrum (tied)
3. Development effort: CMMI/5 spiral
4. Development costs: CMMI/5 spiral
5. Defect potentials: TSP
6. Defect removal efficiency (DRE): TSP
7. Delivered defects: TSP
8. High-severity defects: TSP
9. Total cost of ownership (TCO): TSP
10. Cost of quality (COQ): TSP

Given this information, it seems we have to look into some criteria to choose the best methodology.
According to Barry Boehm, we need to have a look at the following:

  • Is the organisation ready for change at every level to accommodate a methodology, not in words only but in action (e.g. changing office space and seating)?
  • The characteristics of the product, e.g. are lives involved?
  • Is the development culture ready (e.g. pair programming, interpersonal issues, can they collaborate)?
It seems agile is not a silver bullet, but yet another tool in the toolbox.

I encourage you to have a look at this article for more details.

Tuesday 22 December 2015

Refactoring : Mechanical vs Conceptual


Folks in the software industry are familiar with the term refactoring, which means changing the structure of code without changing its behaviour.

However, this can be done at different levels, or from different perspectives.

One is based on detailed code inspection, mechanically concluding what needs to be changed; let's call it mechanical refactoring, or micro-refactoring.

For example, you see that a piece of code has been repeated in different places, or you move some function to another class or module so you can achieve the task at hand more easily.
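
As a small, made-up illustration of this mechanical kind of change (the functions and the discount rule are invented for the example, in Python), the sketch below extracts a duplicated calculation into a single helper; no domain knowledge is needed to spot it or perform it:

# Before: the same discount logic is repeated at two call sites.
def invoice_total_before(items):
    total = sum(price * qty for price, qty in items)
    return total * 0.9 if total > 100 else total

def quote_total_before(items):
    total = sum(price * qty for price, qty in items)
    return total * 0.9 if total > 100 else total

# After: the duplication is pulled into one place.
def apply_bulk_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_total(items):
    return apply_bulk_discount(sum(price * qty for price, qty in items))

def quote_total(items):
    return apply_bulk_discount(sum(price * qty for price, qty in items))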

The other is based on the evolution of domain knowledge. The initial naive, superficial model, based on incomplete, shallow knowledge, starts to evolve as you discover new contours in the domain.

This happens as we learn more about the domain: entities and concepts change shape the more we learn, we decide to add or remove entities, move some functionality to another class or module, new modules come into existence, and others fall out of use.
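
To make the contrast concrete, here is a hypothetical sketch (the shipping entities are invented, in Python) of a change driven by domain knowledge rather than by the code itself: after some knowledge crunching we learn that a shipment follows a route made of legs, so the model is reshaped around that concept:

from dataclasses import dataclass, field
from typing import List

# Naive early model: shallow knowledge of the domain.
@dataclass
class ShipmentV1:
    cargo_id: str
    destination: str          # just a string; routing logic is hidden elsewhere

# Evolved model: the domain experts' notion of a route made explicit.
@dataclass
class Leg:
    origin: str
    destination: str

@dataclass
class Route:
    legs: List[Leg] = field(default_factory=list)

    def final_destination(self) -> str:
        return self.legs[-1].destination if self.legs else ""

@dataclass
class Shipment:
    cargo_id: str
    route: Route              # a new concept that emerged from knowledge crunching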

One is initiated by reviewing code, while the other is initiated by knowledge crunching.
It is also implied that mechanical refactoring gets you to a point where you feel the urge to jump to another dimension to make sense of the domain problems.

Both of these are needed to develop software of good quality.

"Often, though, continuous refactoring prepares the way for something less orderly. Each refinement of code and model gives developers a clearer view. This clarity creates the potential for a breakthrough of insights. A rush of change leads to a model that corresponds on a deeper level to the realities and priorities of the users. Versatility and explanatory power suddenly increase even as complexity evaporates." Eric Evans, DDD 

Wednesday 23 September 2015

Model Driven Architecture Is Not Really An Architecture Methodology


The perception people have when they hear the term 'SOFTWARE ARCHITECTURE' seems to cover a range of definitions; this is even obvious in job ads.

Almost no two descriptions match: they require different sorts of skills, some more technically demanding, others leadership-focused, and so on.

In turn, the architectural design methodologies on offer try to solve different problems.

While researching MDA and reading a few books and articles, it seemed to me that it is a way of designing a system, and has less to do with meeting quality attributes alongside the functional requirements of a piece of software.

It is about creating a model of the system in an abstract, platform-ignorant way (a Platform Independent Model, or PIM), possibly using a DSL (Domain Specific Language).

Models have levels (M1, M2, ...); each lower-level model has more detail added to it, which can be more business rules or more technical detail (in-memory call, remote call, database type, platform, and so on).

Transformers are in charge of creating the lower-level models from the higher-level models, injecting more detail and moving from the abstract towards the concrete.
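
As a toy illustration of that idea (my own sketch, not a real MDA toolchain; the entity and type mappings are invented, in Python), a tiny platform-independent model of an entity is transformed into a platform-specific artefact, here a SQL table definition:

# A minimal PIM: an entity described in platform-neutral terms.
pim = {
    "entity": "Invoice",
    "attributes": [
        {"name": "id", "type": "identifier"},
        {"name": "amount", "type": "money"},
        {"name": "issuedOn", "type": "date"},
    ],
}

# The "transformer": injects platform detail (SQL column types) into the abstract model.
SQL_TYPES = {"identifier": "BIGINT PRIMARY KEY", "money": "DECIMAL(12,2)", "date": "DATE"}

def pim_to_sql(model):
    columns = ",\n  ".join(
        f"{attr['name']} {SQL_TYPES[attr['type']]}" for attr in model["attributes"]
    )
    return f"CREATE TABLE {model['entity']} (\n  {columns}\n);"

print(pim_to_sql(pim))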

What seems to be missing is meeting the quality attributes.
Of course, MDA does not stop you from achieving them; however, it does not provide any guidelines for doing so.


CONCLUSION:

Bearing in mind that design and architecture overlap,
MDA seems to be more about design than architecture. It does not give you guidelines on how to achieve quality attributes; it focuses on models, model generators, and so on.

However, this claim might be confusing if we do not agree on a definition of 'SOFTWARE ARCHITECTURE'.

You can find SEI's definition of software architecture and a few more here.



Monday 14 September 2015

Ubiquitous Language: The Role A Common Language Plays In Development


Developers speak bits; domain experts speak money, policy, and rules.

Developers use their own language to communicate technical concepts and stories. They talk about
booleans, servers, processes, asynchronous calls, and so on.

Domain experts have limited or no understanding of this language; however, they have
domain knowledge expressed through their own domain language. They talk about invoices, cargo, shipping, fees, and so on.

The problem arises when these two worlds need to communicate: a translation is needed.


To compensate for this deficiency, what usually happens is that a developer from the technical team learns the language of the domain experts (well, as well as he can) and acts as a translator, which is not an ideal situation.

The chain of translation is shown here:

Developer <-> Bilingual developer(s) <->  domain experts.


A very basic example of how even a single word can matter: recently I was working on an IPTV web project, using a third-party tool for reporting video consumption attributes.
The word 'BUFFERING' created confusion and wasted a week.

For them, buffering meant the player has run out of data and cannot play any more; to us, it meant downloading the video stream, regardless of the playback status.

A project faces serious problems when its language is fractured. Domain experts use their jargon while technical team members have their own language tuned for discussing the domain in terms of design.
The terminology of day-to-day discussions is disconnected from the terminology embedded in the code (ultimately the most important product of a software project). And even the same person uses different language in speech and in writing, so that the most incisive expressions of the domain often emerge in a transient form that is never captured in the code or even in writing.
Translation blunts communication and makes knowledge crunching anemic.
Yet none of these dialects can be a common language because none serves all needs.

Use the model as the backbone of a language. Commit the team to exercising that language relentlessly in all communication within the team and in the code. Use the same language in diagrams, writing, and especially speech.
Iron out difficulties by experimenting with alternative expressions, which reflect alternative models. Then refactor the code, renaming classes, methods, and modules to conform to the new model. Resolve confusion over terms in conversation, in just the way we come to agree on the meaning of ordinary words.

Recognise that a change in the UBIQUITOUS LANGUAGE is a change to the model.
Domain experts should object to terms or structures that are awkward or inadequate to
convey domain understanding; developers should watch for ambiguity or inconsistency that will trip up design.

Also, using this language when modelling the domain helps developers express the wisdom and concepts of the domain clearly in the code.
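
As a small, invented contrast (the shipping terms are just for illustration, in Python), compare code written in purely technical vocabulary with code written in the ubiquitous language of the domain:

# Technical vocabulary only: the domain is invisible.
def process_record(rec, flag):
    if flag and rec["status"] == 3:
        rec["status"] = 4
    return rec

# Ubiquitous language: the code reads the way the domain experts speak.
class Cargo:
    def __init__(self, tracking_id):
        self.tracking_id = tracking_id
        self.customs_cleared = False
        self.released = False

    def clear_customs(self):
        self.customs_cleared = True

    def release_to_consignee(self):
        if self.customs_cleared:
            self.released = True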

Some excerpts taken from DOMAIN DRIVEN DESIGN, BY ERIC EVANS.

Sunday 26 July 2015

Microservices Design




The term "Micro-service" is getting attention these days, some big names are using this style of design. I was introduced to it by looking at Netflix technical publications, which is similar to the industry that I work in, at the moment.


Currently there is no organisation that defines exactly what this design style is; however, there seems to be some consensus around what a "microservices architecture" is.


Monolithic vs. Microservice

Comparison is usually a nice way to learn a new concept, so let's compare the microservices architecture to the monolithic one.


A monolithic application is a single-unit application. Think of a web application: an HTML/JavaScript client side and a web server that receives client requests, queries its database, generates HTML, and returns it to the client. We want to focus on the server side.


The server-side application is a monolithic executable, one bulk. Updating the application means updating the executable and/or the database. If the application goes down, none of its features are accessible. Usually it is all written in the same programming language and runs in a single process (communication between modules happens within that process). And finally, to scale horizontally you just run more instances of that application (i.e. load balancers and more web servers).


When


A monolithic architecture can be successful; however, in some situations the characteristics of this style of design may not be what you want. They are NOT so compatible with:
  • Horizontal scaling: it may not mean scaling all the features of your application. Only some areas may need to be scaled, each by a different factor and at different times.
  • Polyglot development: different areas of your application may be easier to write, and perform better, using a different language, development paradigm (Node.js, Go, Python, C++, ...), or platform.
  • Independent development cycles: a team has to wait for all the others to finish (design, development, testing, ...) before it can deploy its work.
  • Organising around business capability rather than the organisation's communication pathways. For example, instead of having a UI, middleware, and database team for the whole application, you have a cross-functional team around each service or business capability.
Characteristics

In microservices the flavour is a product mentality rather than a project mentality. In this style there is ongoing work on the software, and the team is more focused on linking business capabilities and providing more features for the customer. This even extends to the development team being responsible for deploying and maintaining the running instances.

It seems that microservices are about smart endpoints and simple pipes. Consider RESTful HTTP services, versus an ESB (Enterprise Service Bus) that can transform messages, apply business rules, and orchestrate activities using WS-* distributed transaction protocols. In microservices each service is conceptually decoupled and its features are as cohesive as possible. In more complicated cases some sort of message queuing may be used, e.g. RabbitMQ.
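
As a minimal sketch of a smart endpoint with a simple pipe (the service, route, and payload are invented; Python's standard-library HTTP server stands in for a real framework), one service exposes its own capability over plain HTTP/JSON, with no bus in the middle:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecommendationHandler(BaseHTTPRequestHandler):
    # Hypothetical 'recommendations' microservice: the logic lives in the endpoint.
    def do_GET(self):
        if self.path.startswith("/recommendations/"):
            user_id = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"userId": user_id, "titles": ["title-1", "title-2"]})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))
        else:
            self.send_error(404)

# The pipe is just HTTP; another service simply GETs /recommendations/<id>.
if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RecommendationHandler).serve_forever()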

Microservices tend to be asynchronous; this increases performance and respects the distributed mode of operation, taking congestion, failures, and so on into account.

Data modelling and persistence are decentralised, meaning each service models the world in its own relevant terms and persists it in whatever format and technology makes the most sense for it: an RDBMS, NoSQL, SQL Server, MongoDB, Cassandra, and so on.

Automation and monitoring are more advanced than in monolithic applications. The pipeline that starts with building and unit testing and ends in deployment to production needs to be automated. Netflix has the concept of a bakery: they keep template instances, bake a service update into a machine image, and release it, all with (or without) the click of a button.
In a world where many services are running, you need to know the statistics of your services and how they perform, track messages, and monitor the health of the services.

A consequence of using services as components is that applications need to be designed so that they can tolerate the failure of services. Any service call could fail due to the unavailability of the supplier, and the client has to respond to this as gracefully as possible.
This is where patterns such as the tolerant reader and throttling have emerged.
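
Here is a minimal sketch of the tolerant reader idea (the payload shape and field names are invented, in Python): the client picks out only the fields it needs, supplies defaults for anything missing, and ignores unknown fields, so a newer version of the supplier's response does not break it:

import json

def read_recommendations(raw_body):
    # Parse the supplier's response defensively instead of binding to its full schema.
    try:
        payload = json.loads(raw_body)
    except (ValueError, TypeError):
        return {"user_id": "unknown", "titles": []}   # degrade gracefully on a bad response

    return {
        "titles": payload.get("titles", []),          # default when the field is absent
        "user_id": payload.get("userId", "unknown"),  # tolerate a missing identifier
        # any extra fields added by the supplier in a newer version are simply ignored
    }

print(read_recommendations('{"userId": "42", "titles": ["t1"], "addedInV2": true}'))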

There is a limited number of domains/contexts that each developer can keep in mind and work with; microservices are compatible with how many contexts a developer can handle and the different skills they need to have. So with smaller, more focused teams you get people with deeper skills and less communication overhead.
  
Service Granularity

There are disagreements about how many lines of code a service should be; however, this seems to be irrelevant. What matters is the level of granularity we are prepared to introduce. This may correlate with lines of code, but lines of code are irrelevant to the idea of microservices.
Please refer to the diagram at the top, observing how some factors change as we move from a monolithic application towards having more and more services.

Modelling the right way is what needs to be cared about. The Domain-Driven Design approach (bounded contexts) could be useful in this style of design.


Challenges

In a way, with a microservices architecture some concerns are pushed to runtime rather than development or build time. This raises issues if you are not prepared for them, especially after you have released to production.


However, these can be mitigated by using some techniques and tools.


Versioning: each microservice can have its own development and release cycle. Consider that service A depends on service B (version 1). What happens when service B (version 2) is released?

Contracts: to make it even harder, service development might happen in silos, with one team not knowing what the others are providing. How can service A know what to expect from service B? We are not talking about the interface alone, but about what each service "needs and provides".


Testing: when testing, service A is going to mock service B. How do developers know what behaviour of service B they need to mock?
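
A small sketch of one common answer (the client class and the stubbed payload are invented; ideally the stubbed behaviour would come from a contract agreed between the two teams), using Python's unittest with a mock standing in for service B:

import unittest
from unittest.mock import Mock

class ServiceAClient:
    # Part of service A: combines its own logic with data fetched from service B.
    def __init__(self, service_b):
        self.service_b = service_b

    def titles_for(self, user_id):
        response = self.service_b.get_recommendations(user_id)  # remote call in production
        return sorted(response.get("titles", []))

class ServiceATest(unittest.TestCase):
    def test_titles_are_sorted(self):
        service_b = Mock()
        # This stubbed response encodes our assumption about service B's behaviour.
        service_b.get_recommendations.return_value = {"titles": ["b", "a"]}

        client = ServiceAClient(service_b)

        self.assertEqual(["a", "b"], client.titles_for("user-42"))
        service_b.get_recommendations.assert_called_once_with("user-42")

if __name__ == "__main__":
    unittest.main()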


Service boundaries: another issue is how to define service boundaries. Part of the answer is service granularity, and a bigger part of the answer could be Domain-Driven Design (bounded contexts).


Transactions: the concept of a transaction, which used to run within a single process, changes as we go distributed. It adds overhead to your services.


Monitoring and tracking: you will need tools for monitoring the health of the services, to see how they are doing and which ones are failing.

A mature microservice-style application needs tools to spin up new servers, automatically or manually, in a matter of minutes if not seconds.

Summary


Microservices are the trend these days. They stand in contrast to monolithic applications; they are another way of designing applications, with their own pros and cons.


The amount of preparation you need to get ready for microservices is considerable: continuous delivery, automation, testing strategy, deployment, and more require you to have your product and supporting infrastructure in top shape.


Start simple (with a monolithic application), then break chunks off it bit by bit and move them to microservices. You need to confirm that the business model works first, then try to move.


References: Martin Fowler, Netflix articles, Ryan Murray & John Napier.  

Thursday 2 July 2015

Attribute Driven Design

So far I have talked about what tactics are and how they fit into architectural design patterns and styles.
I encourage you to read Deriving Architectural Tactics: A Step Toward Methodical Architectural Design.

You have all the bits of knowledge; however, you are going to need a methodical way of putting it all together and designing the system.
This is where Attribute-Driven Design comes onto the scene.

ATTRIBUTE DRIVEN DESIGN.

I tried using it, and in the beginning it was hard to follow, but you get the hang of it eventually. This method is used to create an architecture, down to a few levels of detail, that satisfies the quality attributes of a system. It creates the main structures for the QAs.

Inputs include these architecturally significant requirements:

  • quality attribute requirements 
  • design constraints
  • functional requirements

Outputs include
  • first several levels of module decomposition
  • various other views of the system as appropriate
  • set of elements with assigned functionalities and the interactions among the elements 
Before walking through the steps, let's define these inputs:
Functional requirements define what a system should do to meet stakeholder needs. For example :
 - Users should be able to view their account activity.
 - Users should be able to buy and sell goods.

 Design constraints are decisions about a system's design that must be incorporated into the final design. Examples:

 - Should use CouchDB as storage.
 - Should use HTTP as the communication protocol.
 - The system shall run on both Unix and Windows.

Quality attribute requirements are the requirements that indicate the degree to which a system must exhibit various properties. For example:
 - The system must be build-able within six months.
 - The system shall process sensor input within 1 second.
 - The system shall allow unit tests to be performed within 3 hours with 85% path coverage.  

And don't forget that each of these can imply another. For example:
 - "Joe is the only resource available to manage the persistence storage, and he only knows Oracle" means the system should use Oracle.
 - "Market demand will increase dramatically in the next six months" means the system must be build-able within six months.


Now let's have a look at the steps:

STEP 1:

In essence, you make sure that the system’s stakeholders have prioritised the requirements according to business and mission goals. You should also confirm that there is sufficient information about the quality attribute requirements to proceed.

STEP 2:

In this second step, you choose which element of the system will be the design focus in subsequent steps. You can arrive at this step in one of two ways: 

1. You reach Step 2 for the first time as part of a “greenfield” development. The only element you can decompose is the system itself. By default, all requirements are assigned to that system. 

2. You are refining a partially designed system and have visited Step 2 before. In this case, the system has been partitioned into two or more elements, and requirements have been assigned to those elements. You must choose one of these elements as the focus of subsequent steps.
In the second case, you might choose the element based on risk and difficulty, business criteria, organisational criteria, and so on.

STEP 3:

At this point, we have chosen an element of the system to decompose, and we have the stakeholders' prioritised list of requirements that affect that element.
Stakeholders have put High, Medium, or Low next to each requirement, indicating how important it is to them. The architect then also puts High, Medium, or Low next to each requirement, indicating the potential impact of the requirement on the architecture.
You then have a pair of values for each requirement:

(H,H),(H,M),(M,H),(M,M),(L,H),(L,M),...

Just notice that further down the design, after some analysis, you may find that your assumptions need to change; change them and choose the drivers again.
Five or six candidates are enough to go forward.
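
A small illustrative sketch of this ranking (my own, not part of the ADD material; the requirements and ratings are invented, in Python):

RANK = {"H": 0, "M": 1, "L": 2}

candidate_drivers = [
    # (requirement, stakeholder importance, architectural impact)
    ("Process sensor input within 1 second", "H", "H"),
    ("Run on both Unix and Windows", "M", "H"),
    ("Unit tests within 3 hours, 85% path coverage", "H", "M"),
    ("Configurable report colours", "L", "L"),
]

# Sort so that (H,H) pairs come first, then (H,M) and (M,H), and so on.
candidate_drivers.sort(key=lambda d: (RANK[d[1]], RANK[d[2]]))

for requirement, importance, impact in candidate_drivers[:6]:   # keep five or six candidates
    print(f"({importance},{impact}) {requirement}")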

STEP 4:

At this point, we have chosen an element of the system to decompose and identified the candidate architectural drivers. Now we need to choose our design concept, which means choosing the major types of elements and the types of relationships among them.
Design concepts and QA requirements help you achieve this.

You can follow a methodical set of steps to derive this:

  • Identify the design concerns that are associated with the candidate architectural drivers. For example, for a quality attribute requirement regarding availability, the major design concerns might be fault prevention, fault detection, and fault recovery.
  • For each design concern, create a list of alternative patterns that address the concern.
  • Identify each pattern's discriminating parameters to help you choose among the patterns and tactics in the list. For example, in any restart pattern (e.g. warm restart, cold restart), the amount of time a restart takes is a discriminating parameter.
  • Select the patterns from the list that you feel are most appropriate for satisfying the candidate architectural drivers, and record the rationale for your selections.
  • You can create a matrix of each pattern's pros and cons versus each architectural driver (see the sketch after this list).
  • Choose which set of patterns, combinations, or new patterns you want to use, and record your rationale.
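
An illustrative sketch of such a matrix (my own example; the patterns, drivers, and notes are invented, in Python):

evaluation_matrix = {
    "Active redundancy": {
        "Recover from node failure within 5 s": "pro: a hot spare takes over immediately",
        "Hardware cost": "con: doubles the number of servers",
    },
    "Warm restart": {
        "Recover from node failure within 5 s": "con: restart time may exceed 5 s",
        "Hardware cost": "pro: no extra servers needed",
    },
}

for pattern, notes in evaluation_matrix.items():
    print(pattern)
    for driver, note in notes.items():
        print(f"  {driver:<40} {note}")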

Review, evaluate, and refine.  

At this point:
 - You have decided on an overall design concept, with the major component types and the relationships among them.
 - You have assigned some functionality to each element.
 - You have decided on the types of relationships: remote call, local call, synchronous, asynchronous, and so on.
 - You have captured the requirements of the elements and the data models.

STEP 5:

At this point, you instantiate the various types of software elements you chose in the previous step. Instantiated elements are assigned responsibilities according to their types; for example, in a Ping-Echo pattern, a ping-type element has ping responsibilities and an echo-type element has echo responsibilities. Responsibilities for instantiated elements are also derived from the functional requirements associated with candidate architectural drivers and the functional requirements associated with the parent element. At the end of Step 5, every functional requirement associated with the parent element must be represented by a sequence of responsibilities within the child elements.
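
As a minimal, illustrative ping/echo sketch (my own simplification; the timings and class names are invented, in Python), a ping-type element checks an echo-type element and reports a failure if no echo arrives within a timeout:

import queue
import threading

class EchoElement:
    # Echo responsibilities: reply to every ping it receives.
    def handle_ping(self, reply):
        reply.put("echo")

class PingElement:
    # Ping responsibilities: send pings and detect a missing echo.
    def __init__(self, target, timeout=0.5):
        self.target = target
        self.timeout = timeout

    def check(self):
        reply = queue.Queue()
        threading.Thread(target=self.target.handle_ping, args=(reply,)).start()
        try:
            return reply.get(timeout=self.timeout) == "echo"
        except queue.Empty:
            return False   # no echo within the timeout: assume the element has failed

monitor = PingElement(EchoElement())
print("echo element alive:", monitor.check())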

STEP 6:

You define the services and properties required and provided by the software elements in our design. In ADD, these services and properties are referred to as the element's interface. Note that an interface is not simply a list of operation signatures. Interfaces describe the PROVIDES and REQUIRES assumptions that software elements make about one another; a small illustrative sketch follows the list below. An interface might include any of the following: 
 • syntax of operations (e.g., signature) 
 • semantics of operations (e.g., description, pre- and postconditions, restrictions) 
 • information exchanged (e.g., events signaled, global data) 
 • quality attribute requirements of individual elements or operations 
 • error handling.
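
A hypothetical sketch of describing one element's interface in these terms (the SensorReader element and its numbers are invented, in Python), going beyond a bare signature to semantics, errors, and a quality attribute expectation:

import abc

class SensorReader(abc.ABC):
    # PROVIDES: calibrated sensor readings.  REQUIRES: a connected sensor bus.

    @abc.abstractmethod
    def read(self, sensor_id: str) -> float:
        """Return the latest calibrated reading for sensor_id.

        Precondition:  the sensor identified by sensor_id is registered.
        Postcondition: the value returned is no older than 1 second.
        Quality attribute: the call completes within 100 ms.
        Errors: raises KeyError for an unknown sensor, TimeoutError on bus failure.
        """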

STEP 7:

You verify that the element decomposition thus far meets functional requirements, quality attribute requirements, and design constraints. You also prepare child elements for further decomposition. 

NEXT:

Once you have completed Steps 1–7, you have a decomposition of the parent element into child elements. Each child element is a collection of responsibilities, each having an interface description, functional requirements, quality attribute requirements, and design constraints. You can now return to the decomposition process in Step 2 where you select the next element to decompose. 

I have used SEI's material to write this post; if you are interested in more details on this method, their website has a couple of good articles on it.

Monday 25 May 2015

Tactics and Architectural Patterns


So far we have learned about patterns and tactics.

Patterns are solutions that resolve multiple forces, whereas tactics focus on specific quality attributes. To apply both effectively, architects need to understand how architectural tactics and patterns relate to each other and how to use them.

Patterns are built from a collection of tactics, realising some quality attributes and possibly affecting some others.

Even different implementations of a pattern may use different sets of tactics.

Let's consider the pipe-and-filter architectural pattern: from the modifiability point of view it contains the following tactics (a minimal sketch of the pattern follows the list):

  • Increase Cohesion / Maintain Semantic Coherence
  • Reduce Coupling   / Use Encapsulation
  • Reduce Coupling   / Use an Intermediary
  • Defer Binding Time/Use Start-Up Time Binding
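
A minimal pipe-and-filter sketch (my own example, in Python) to make those tactics concrete: each filter is an independent, encapsulated generator, the pipe is plain composition acting as the intermediary, and the filters are only bound together at start-up:

def read_lines(text):
    for line in text.splitlines():
        yield line

def strip_blank(lines):
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):
    for line in lines:
        yield line.upper()

source = "alpha\n\nbeta\ngamma\n"
# The filters are composed only here, at start-up time.
pipeline = to_upper(strip_blank(read_lines(source)))
print(list(pipeline))   # ['ALPHA', 'BETA', 'GAMMA']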
Now, using the modifiability tactics catalogue, can you find the tactics that exist in the Layers pattern?
Answer: 

  • Increase Cohesion / Maintain Semantic Coherence
  • Increase Cohesion / Abstract Common Services
  • Reduce Coupling   / Use Encapsulation
  • Reduce Coupling   / Reduce Communication Paths
  • Reduce Coupling   / Use an Intermediary
  • Reduce Coupling   / Raise the Abstraction Level