Architecture (21)
Being agile about SOA Sunday, September 14, 2008

SOA and Agile are two approaches that have a hard time living together. Martin Fowler just wrote about it: http://martinfowler.com/bliki/EvolutionarySOA.html, Jim Webber talked about it on Developer Summit and some other places (http://www.infoq.com/interviews/jim-webber-qcon-london, http://www.expertzone.se/dev07/) and I wrote a piece on it a couple of years ago: "Why SOA is broken" (https://lowendahl.net/showShout.aspx?id=116).

Yesterday we in ALT.NET Sweden had an unconference and I facilitated a talk around agile and SOA. The feeling I got from that discussion is in line with what Martin mentions in his post:

"the fundamental question I ask is "is change predictable?" Only if change is predictable is a planned design approach valid. My sense is that if predictability is hard within the context of a single application, it's doubly hard across an enterprise."

The problem with most SOA efforts is that the business and SO architects often take too much comfort in the idea that their process and message model is "done" and therefore won't need hands-on work after it leaves their hands. This is a smell in oh so many ways. First of all, when a business says it's "done" with its processes, it'll be a done business. Businesses evolve and change, otherwise businesses die; this was one of the reasons for SOA in the first place: to allow the business to change. Secondly, saying that you are done with something as complicated as business processes is like saying that you and the work you do are perfect (if you just were a bit more humble of course ;). There will always be a need for change and that change will flow down to the team that builds the process automation tools, no doubt.

So as Martin concludes, I would like to say +1. Businesses evolve, business processes evolve in an iterative and incremental fashion, so why shouldn't a task as complex as building automation for those processes have the possibility to evolve iteratively and incrementally as well?

Leave a comment Comments (0)
 
A default architecture – without stored Procedures Friday, June 27, 2008

I'm aware that this is a very heated debate, and I'm also aware that however I phrase this post there will always be opinions opposite from mine. But nevertheless I would like to write a little about why I put stored procedures on the substitutes' bench and only call them in where something can't be solved in any other way.

To be clear; my position is not that stored procedures are "evil" or have no use at all, my position is that stored procedures usually force me to abstain from other technologies and techniques I value more. Furthermore, there will always be scenarios where stored procedures are the preferred solution, I just don't believe in making them my default choice.

So let's get to it, first my main pain points:

Pain point #1: Stored Procedures lack proper version management

Sprocs don't version easily. There are a lot of challenges with version management and sprocs: the database can only have exactly one version of a sproc at any given time. If I want to support two versions I need two separate sprocs. This might seem trivial, but when you build multi-tenant systems where versions aren't just about bug fixes and new features, but slight nuances in the way they should behave for different tenants (customer Acme's business differs from customer Mechanics' business), it becomes a bit more messy.

Additionally, for a long, long time there hasn't been a good source control story around sprocs. That has changed now with the new database role in Visual Studio 2008 Team System and the database project type (which rocks btw). Even though I like the database role I still feel that the story around code versions and version control isn't strong enough, and that pains me.

 

Pain point #2: Stored procedures force me to maintain two code bases

Having code in the database forces me to maintain two different code bases and two different skill sets. It also moves me out of my environment with the tools I'm used to, like ReSharper, a proper debugging environment and all the other nice things I use when writing C# code. Maintaining two code bases also makes me less flexible, less agile if you will, and I just don't like it.

 

Pain point #3: T-SQL Doesn't express business logic well

I can't say I'm an expert on T-SQL, but I'm fairly good at it. My view on T-SQL is that it's really good at querying, but for expressing business logic it lacks a lot of the constructs that I get from a good solid OO language. I can't use well-known patterns like the strategy pattern to vary calculations or other parts of the business logic, and it's much harder to take factors other than pure data from the database into account when working out the logic. And frankly it is ugly and has very poor readability, which in turn means that it will be much harder to maintain.
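
To illustrate the kind of construct I miss in T-SQL, here is a minimal sketch of the strategy pattern in C#. The types and numbers are made up purely for the example; the point is that I can vary a calculation per customer type without touching the calling code:

public interface IFreightStrategy {
    decimal CalculateFreight(decimal orderTotal);
}

public class StandardFreight : IFreightStrategy {
    public decimal CalculateFreight(decimal orderTotal) {
        return orderTotal < 500m ? 49m : 0m; // free freight over 500
    }
}

public class ContractCustomerFreight : IFreightStrategy {
    public decimal CalculateFreight(decimal orderTotal) {
        return 0m; // contract customers never pay freight
    }
}

public class OrderService {
    private readonly IFreightStrategy _freight;

    public OrderService(IFreightStrategy freight) {
        _freight = freight;
    }

    public decimal CalculateTotal(decimal orderTotal) {
        // the varying part of the logic is plugged in, not hard-coded
        return orderTotal + _freight.CalculateFreight(orderTotal);
    }
}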

 

Pain point #4: Stored procedures are hard to test

It's impossible to test sprocs in isolation. It's also impossible to test sprocs without firing up a database. Testing against a database takes a lot of time and a lot of effort. Microsoft has, again, improved this area a bit with their database role in Visual Studio 2008, but it's not really enough for me. My testing requirements go past the functionality I get from Visual Studio. Sure, for testing database interaction it works like a charm, but for proper unit testing it just doesn't cut it. Also, since sprocs often look like transaction scripts, they are usually these big things and it's very hard to test parts of a sproc in isolation; you have to test everything or nothing.
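
For comparison, this is the kind of test I want to be able to write. A minimal NUnit-style sketch, reusing the hypothetical freight strategy from the example above, that exercises the logic completely in memory with no database involved:

using NUnit.Framework;

[TestFixture]
public class OrderServiceTests {

    [Test]
    public void Standard_customers_get_free_freight_over_500() {
        OrderService service = new OrderService(new StandardFreight());

        decimal total = service.CalculateTotal(600m);

        Assert.AreEqual(600m, total); // no freight added
    }

    [Test]
    public void Standard_customers_pay_freight_on_small_orders() {
        OrderService service = new OrderService(new StandardFreight());

        Assert.AreEqual(149m, service.CalculateTotal(100m)); // 100 + 49 freight
    }
}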

 

Pain point #5: Stored procedures force me to write T-SQL for trivial tasks

Why should I have to write trivial T-SQL queries myself when I can have a tool do it for me? No doubt, if I'm really, really good at writing queries for my scenario and stick them into a sproc, I might get a little better performance than something auto-generated and generic. But is it worth the time you have to put in? I don't think it is. If I get a serious performance problem I will address it when it arises, not before. Premature optimization being the root of all evil and all that. I really believe that the gain I get in productivity is worth the loss in performance (if any; putting a query in a sproc doesn't guarantee better performance). It might even be that the tool generates better queries than those I can write myself (and that probably goes for some of your queries as well ;)

 Edit  @ 16:13, forgot one :)

Pain point #6: Stored procedures are static

Stored procedures are not flexible around parameters, and for every variation of parameters, joins, ordering or other differences I might want, I need a separate sproc to support it. This means that if I want a query that is slightly different I need to copy and paste the sproc, change what differs, and then maintain two similar sprocs and remember to change them both. Again, maintenance suffers.

So what do I suggest instead?

My last pain point kind of reveals where I'm going with this. I prefer dynamic SQL generated by a tool. I prefer it because it lacks the above pain points and makes me much more productive.

That's why my default architecture always starts out with an ORM and dynamic SQL if I need to talk to a database: it will generate all the trivial SQL, it will enable me to test in isolation, there will only be one code base to maintain, versioning works a lot better and all my favorite tools will help me write good and solid code.
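
As a rough illustration of what "dynamic SQL generated by a tool" looks like in practice, here is a sketch of an NHibernate-style criteria query. The Customer class is a placeholder assumed to be mapped by the tool, the mapping and session setup are omitted, and the exact API may differ between NHibernate versions, so treat this as illustrative rather than exact:

using System.Collections;
using NHibernate;
using NHibernate.Expression;

// Placeholder entity, assumed to be mapped by the O/R tool (mapping omitted).
public class Customer {
    public int Id;
    public string Name;
    public string City;
}

public class CustomerRepository {
    private readonly ISession _session;

    public CustomerRepository(ISession session) {
        _session = session;
    }

    // The mapper composes the SELECT, joins and WHERE clause for us, so a
    // slightly different query is just a slightly different criteria,
    // not a copy-pasted sproc.
    public IList FindByCity(string city) {
        return _session.CreateCriteria(typeof(Customer))
            .Add(Expression.Eq("City", city))
            .List();
    }
}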

Does that mean I don't use sprocs at all? Certainly not, but dynamic SQL is my default and I call sprocs in when I think I need them, which these days is very, very rare.

In the coming weeks I'll try to explain some of the misunderstandings around dynamic SQL in comparison to sprocs, and also talk a little bit about particular solutions and how I go about them.

--

More information on ...

... visual studio team system database role:  http://msdn.microsoft.com/en-us/vsts2008/products/bb933747.aspx
... ReSharper: http://www.jetbrains.com
... Object Relational Mapping: http://en.wikipedia.org/wiki/Object-relational_mapping

... Transaction Script: http://www.martinfowler.com/eaaCatalog/transactionScript.html

Leave a comment Comments (14)
 
One Model to rule them all Monday, April 28, 2008

"One Model to rule them all, One Model to find them,
One Model to bring them all in to the darkness and bind them"

The quote comes from the book "The Lord of the Rings", and like the characters in that book, many in the developer community await "the prrrecious", but I fear that they will wait forever. I firmly believe that "the prrrecious" will never surface in this day and age.

The notion of a model is not new to the software industry; for ages models have described our software. With the models, a simple truth has always been present: every model excels at its piece of the puzzle. As an example, different parts of UML describe separate aspects of software and solve unique description challenges.

Although always present, this truth seems only to be attached to the description of software, not the implementation of it.

In the Microsoft developer community the introduction of LINQ, LINQ to SQL and the Entity Framework raised a question that has been debated heavily since; "Should I use relational models or domain models to represent my data?"

Another and a bit older question that is also heavily debated is; "Should I use message models or domain models to represent my entities?"

The answer to both questions is the same; there is no "prrrecious" model, so don't wait around for it. Instead, follow these two simple design principles:

Use the model that solves the immediate challenge the best.

Make sure that the transitions between different models are as smooth as possible.

This will guarantee that for each piece of your implementation puzzle you will get the best suited model.

In my opinion the relational model (tables, rows, columns and normalization) is by far the most efficient way of storing information and ensuring its integrity. Domain models are by far the most efficient way of processing data and applying business rules in code. Messages and services excel at creating and sharing loosely coupled and highly agile processes across the enterprise(s).
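
As a small, made-up sketch of what a smooth transition between two of these models can look like in code: a domain entity used for the business rules, a message type used at the service boundary, and an explicit translation between them. All names here are hypothetical:

// Domain model: behaviour and rules live here.
public class Order {
    public int Id;
    public decimal Total;

    public void AddItem(decimal price, int quantity) {
        Total += price * quantity;
    }
}

// Message model: a flat, serializable contract for the service boundary.
public class OrderPlacedMessage {
    public int OrderId;
    public decimal OrderTotal;
}

// The transition between the two models, kept in one well-known place.
public static class OrderTranslator {
    public static OrderPlacedMessage ToMessage(Order order) {
        OrderPlacedMessage message = new OrderPlacedMessage();
        message.OrderId = order.Id;
        message.OrderTotal = order.Total;
        return message;
    }
}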

This makes the different models complement each other, not exclude one or the other.

So therefore I propose that we all chant this mantra:

"I will use the right model for each piece of the software puzzle, even for the implementation"

--
More on Domain Driven Design: http://www.domaindrivendesign.org/
More on Service Design: http://msdn2.microsoft.com/en-us/skyscrapr/ms954638.aspx

 

 

Leave a comment Comments (1)
 
Enterprise Architecture and Agile Software development - A Match made in Hell? Wednesday, April 23, 2008

- "How does the long-term strategy thinking from an Enterprise Architect match with a short-term iteration based agile developer?"

This is a question that often arises when agile methodology is discussed with enterprise architects, and when enterprise architecture is discussed with agile developers. At a quick glance it seems like the two aren't compatible, and when listening to some of the arguments from both roles you could get convinced that this is actually the case.

But I beg to differ.


First let's examine what the collision points are, and then I'll explain why I feel the two are indeed compatible.

Different views on architecture, the challenge!

The confusion around what architecture is and is not is complete and total. There are probably as many definitions of architect and architecture as there are people in the business of software. This creates an extreme challenge when communicating and trying to figure out what responsibilities different roles have in a project.

Combine this confusion with the agile stance that "architecture" is defined by the team currently implementing a feature, who also limit the "architecture" to supporting that feature, nothing more and nothing less. It's quite easy to foresee the conflict promptly arriving.

On one side you will have an Enterprise Architect's view on "architecture" with a goal to create a long-term and solid "architecture" that will support the business.

On the other side you have the agile developer who focuses on delivering business value one feature at a time, putting the "architecture" in the back seat and letting it evolve with each iteration in a short-term fashion.

The lunch discussions are bound to be heated if this is the point of origin both roles have.

 

Are they so different after all?

Now that we understand the different viewpoints, let's dig deeper into the definitions and look at what the roles bring into the term "architecture".

In my experience, when an Enterprise Architect talks about architecture, her main concern is the overall architectural strategy of the business she's supporting. Her concerns are usually about data management across the enterprise, single sign-on solutions for all existing systems, interoperability of all the systems in her care and many more enterprise-wide decisions.

An agile developer's main concern is usually how to implement the feature she's currently working on, being true to YAGNI and not over-engineering the solution. She's usually talking about where to host her code, how to store and access the information needed and what solution/design patterns she could benefit from using.

Are the concerns even remotely the same? I would argue that they are not.

I would argue that the use of the term "architecture" from the agile developer's perspective is incorrect and that the two roles talk about different things. A more correct description would be that Enterprise Architects think about architecture and software developers think about software design. Now, here are two terms that are compatible indeed:

 

Software Architecture and Software design.

They are two different responsibilities and two different sets of artifacts to be responsible for. This is probably not news to many of you, but some architects and developers mix these two up. I've heard descriptions of Object-Relational Mapping as being an architectural decision, and of whether or not to use an integration platform for integration as a design decision. This is one of the bases for the confusion and one of the problems when trying to mix Enterprise Architecture with agile development practices.

 

Building a bridge

Even though the architect might be the same person who takes the decisions about how the software should be designed, for the industry's sake we need to be clear that the two are very different beasts, and a person who is responsible for the decisions about both actually fulfills two separate roles in the project.

So Enterprise Architects; don't meddle with the code or design decisions!
Agile Developers; respect architecture decisions as incoming constraints on your projects.

Follow these two simple rules and Agile development methodology will live in peace with Enterprise Architecture even in the same universe.

Leave a comment Comments (0)
 
REST: The Web as a Resource Repository Sunday, April 13, 2008

Developer Summit 2008 had a lot of different themes, one of them was "web as a platform" with talks about S+S / SaaS, REST, Internet Service Bus and ADO.NET Data Services, which all in some way extend the web as we know it.

One of the talks that made a real impression on me was David Chappell's second talk, SOAP/WS-* and REST:
Complementary Communication Styles.

REST vs SOAP or WEB vs Enterprise
David had some really great points to contribute to the SOAP vs REST debate. His starting point was that the REST and SOAP communities have trouble understanding each other's needs. In his experience, the two communities come from different problems to solve. The SOAP community usually comes from enterprise-based solutions, where there are very interesting enterprise challenges to solve and that is what SOAP is targeted at. The REST community on the other hand comes from web solutions and the challenges in that space. He points out that one proof of this is the different views on security, where enterprise-grade security demands WS-Security (certificates, claims-based, Kerberos etc.) while for solutions on the web it's usually sufficient with SSL and user name / password.

Verb vs Noun
Furthermore he spoke about the different approaches to services that the REST and SOAP people take. REST exposes data as resources on the web, with every piece of data having its own unique URL, like http://www.contoso.com/Products/2/ for a product with id 2. SOAP on the other hand exposes processes and actions. So it really is two different styles of architecture and two ways of organizing your services.
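
A tiny sketch of the noun-oriented style: fetching the product resource above with a plain HTTP GET. The URL is the made-up example from the talk, so nothing will actually respond to it; the point is that the resource is addressed by URL and the verb is the operation:

using System;
using System.IO;
using System.Net;

public class RestClientExample {
    public static string GetProduct(int id) {
        // The resource is identified by its URL; the verb (GET) is the operation.
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://www.contoso.com/Products/" + id + "/");
        request.Method = "GET";

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream())) {
            return reader.ReadToEnd(); // e.g. an XML representation of the product
        }
    }
}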

In conclusion
I do see a lot of scenarios where a REST-based architecture might be useful, especially in scenarios where AJAX, Silverlight or other web technologies should work as the consumer of data. The feeling I get though is that the architecture encourages data-centric services where orchestration, workflows and syndication are handled by the consumers rather than by the services. I've had loads of trouble with this approach in the past, and therefore I'm grateful for the insight that David and others gave me at Developer Summit. I now have another tool in my toolbox, but I still think that I will prefer SOAP for most of my coming scenarios.

 

More information about REST:

Wikipedia article: http://en.wikipedia.org/wiki/Representational_State_Transfer

Podcast from Thoughtworks with Jim Webber, Martin Fowler etc: http://www.thoughtworks.com/what-we-say/podcasts.html (shows #4 and #5)

Magnus Mårtensson's recap of David's talk: http://blog.noop.se/2008/04/11/David+Chappell+On+SOAP+Vs+REST.aspx

Leave a comment Comments (0)
 
Microsoft modeling more and coding less Thursday, February 07, 2008

The Oslo vision presented at TechEd didn't really impress me or strike me as innovative, but as more details around the Oslo initiative are released I'm starting to wonder where Microsoft is heading. I'm all for making software development easier, but with this new push and the new language called 'D' they are targeting 'non-developers'. In a statement, Don Box claims that they are not building a new C.A.S.E. tool, and by the looks of the tool he might be right. But when it comes to the ideas and visions behind the tools they are definitely the same as those behind C.A.S.E: "Building applications should be as easy as drawing an image". I'm yet to see a tool that fulfils that vision and I'm yet to see a project with non-developers building a successful application.

I'm all for making complex tasks simpler, but not for making complexity simplistic. That's bound to be a bigger problem than it solves.

Some more information and buzz around Oslo and D can be read here: http://blogs.zdnet.com/microsoft/?p=1159

Leave a comment Comments (0)
 
Principles and Patterns for Security Wednesday, November 07, 2007

I was really surprised when I saw this topic, or rather who was to deliver it. I hadn't really seen Ron Jacobs as a security guy, but come to think about it, and after the session, it made a lot of sense. Being an aware architect you need to care about security.

One of the most interesting practices that Ron showed was the notion of agile user stories in a BDD fashion for threat modeling, where the threat was written in the following form:

As an attacker
I want to
So that
By

The example he gave was this:

As an attacker
I want to obtain credentials
So that I can plunder bank accounts
By tricking users into logging into my bogus site with a Phishing mail

This was really a refreshing take on threat modeling and much more in line with how I like to capture requirements. Ron also talked about which of these stories we really should spend time on, and explained a model where the communication was about "what should not happen" rather than "how secure do you want to be" (well, plenty secure thank you!) and how these became our security objectives.

The rest of the talk was about four main points that he wanted to make. None of them really mind-blowing, but really important:

  • Reduce the attack surface. Make sure there is as little as possible to attack. He talked about how reducing the attack surface in Windows Server 2003 had kept the number of critical security bugs to a minimum.
  • Defense in depth. The importance of having multiple redundant and layered safety systems, so that if one fails there is yet another one in the way. Again he talked about W2K3, how everything was turned off by default, down to the littlest thing, and how other features like buffer overrun detection had made a bug non-critical on W2K3 but really nasty on W2K.
  • Least privilege. Don't run as local system. Duh! But obviously people haven't gotten it yet. Does it matter that Microsoft makes huge investments in security if application developers just cancel them out?
  • Fail to secure mode. This goes together with least privilege. If you, despite all odds, get a problem, make sure it's contained and only hurts the running process.

Overall a really refreshing session, and even though I thought some open doors were kicked in, it seemed like the crowd heard it for the first time.

Leave a comment Comments (0)
 
Life Beyond Distributed Transactions: an Apostate's Opinion Tuesday, November 06, 2007

I'm quite sure that Pat Helland doesn't need an introduction, but he's one of the guys behind the distributed transaction insanity. Today he made it very clear that he thinks differently these days:

"Local transactions are good, distributed transactions suck! I'm sorry do I need to be more clear about it?" - paraphrase from his talk today.

This is really comforting to hear from someone who really put people in trouble back in the COM+ days, when he and other architects, in Sweden amongst other places, put out recommendations to run all business rule components in COM+ since the DTC would do magic for you. I never saw that magic and wondered what I did wrong; today Pat gave me peace.

Pat left Microsoft for a while and worked out in the wild at Amazon. I'm pretty sure that's what changed his view on distributed transactions. He had several points about scalability, and while not all of them had to do with transactions, a lot of them did.

The first thing he talked about was the differentiation between the "scale aware" and "scale agnostic" parts of an application, where the "scale agnostic" part shouldn't know about scaling or where data comes from, but the scale-aware part should. Typically what he says is that the consumer of a service or process shouldn't know about the technical aspects of how that process or service is scaled out. That includes coordinating transactions; it's not the job of the consumer.

Digging deeper into that thought, he talked about "items" (he used to call them entities but didn't want to confuse the crowd, with Entity Framework being pushed hard at TechEd). He defined an item as: "a piece of data that can be identified with a unique key and exists at exactly one machine at a time".

A very important point he made about these items was that to scale, they need to be boundaries for transactions. So for example, a customer item that has a customer address item placed on some other machine for scalability would not update the customer and the customer address in a single transaction, but in two separate transactions, one for each item.

If those two really need to be in the same transaction, compose a new item with the rules stated before (single machine, unique identity) and you'll have the transaction boundary right there.
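
A minimal sketch of what that can look like in code, assuming plain ADO.NET and made-up tables and connection strings: because the customer item and the customer address item live on different machines, each update runs in its own local transaction instead of one transaction spanning both:

using System.Data.SqlClient;

public class CustomerUpdater {
    // Two connection strings because the two items live on different machines.
    public void UpdateCustomerAndAddress(string customerDb, string addressDb, int customerId) {
        // Local transaction #1: the customer item.
        using (SqlConnection conn = new SqlConnection(customerDb)) {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction()) {
                SqlCommand cmd = new SqlCommand(
                    "UPDATE Customer SET Name = @name WHERE Id = @id", conn, tx);
                cmd.Parameters.AddWithValue("@name", "New Name");
                cmd.Parameters.AddWithValue("@id", customerId);
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }

        // Local transaction #2: the customer address item, on another machine.
        using (SqlConnection conn = new SqlConnection(addressDb)) {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction()) {
                SqlCommand cmd = new SqlCommand(
                    "UPDATE CustomerAddress SET City = @city WHERE CustomerId = @id", conn, tx);
                cmd.Parameters.AddWithValue("@city", "New City");
                cmd.Parameters.AddWithValue("@id", customerId);
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }
    }
}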

To cope with this, our systems need to work with "reservations" of changes instead of actual changes. Meaning that I reserve a booking or an order from a customer item, since the two of them can't be in the same transaction. This creates some other side effects, like the fact that you won't know the exact truth about the state of the system but have to make estimated guesses about what you think it is.

He gave a couple of good examples. One of them was hotel reservations, where you can't be certain exactly how many guests you have one night until they actually sleep in your beds; if the hotel is to make breakfast orders they have to guess. What ratio of the reservations will actually be committed, and how much breakfast do we need then?

Another example was shipping: we have shipped a lot of products and gotten new orders on the products, so what is my current inventory level? Do I need to order new products? If I know that 10% of all orders sent out come back, I can't trust the numbers in my warehouse to be the exact inventory level and therefore I have to plan for this.
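
As a trivial sketch of the kind of qualified guess he is describing (the ratios and numbers here are obviously made up for the example):

public static class Estimates {
    // Hotel: how many breakfasts to prepare, given that historically
    // only a fraction of reservations actually show up.
    public static int ExpectedBreakfasts(int reservations, double showUpRatio) {
        return (int)(reservations * showUpRatio); // e.g. 200 * 0.85 = 170
    }

    // Warehouse: effective inventory, given that a share of shipped
    // orders historically comes back as returns.
    public static int ExpectedInventory(int onHand, int shipped, double returnRatio) {
        return onHand + (int)(shipped * returnRatio); // e.g. 500 + 1000 * 0.10 = 600
    }
}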

This was really interesting and insightful, and when applied to messaging and data management the picture got really clear. We aren't good enough at math to actually have an exact number on anything in our business; we always need to account for trends and make qualified guesses. So why would atomic transactions over several items be important? They wouldn't, since they wouldn't be the truth anyway.

Pat also talked a lot about messaging and how important that is in a scalable world, how different items connect through messaging, and he explained some of the more important patterns for messaging between items. I'll probably have reasons to come back to them, but I would like to do some more research on the subject before I do.

Leave a comment Comments (4)
 
Microsoft's Vision for the future of application development... Monday, November 05, 2007

... or in short "Oslo"

Since the community at large did a collective "ohh, ahh" when this set of technology was first publicly announced, I went to this presentation with great hopes of getting my foundation shaken and getting a new perspective on software development.

How disappointed I got. Not only did the presentation suck (it has to be the most pointless demo ever), "Oslo" turned out to be not that innovative.

The message of Oslo was the same mantra as we've heard a zillion times before:

There are boundaries between different aspects of an application, there are devs, business analysts, IT pros etc, etc. This is a problem that we need to solve, bla bla bla. Same old, same old.

The idea with Oslo is to be a set of technologies that will try to bridge these gaps and help cross those boundaries, and do that with executable models. When the technology is explained, it seems like what "Oslo" really is about is the next step in Software Factories, DSLs and service repositories.

With a five-level technology stack they want to take Software Factories, DSLs, repositories and other artefacts from process- and model-driven applications into the mainstream, following the path already laid down with Workflow Foundation and the attempts at DSLs (the Visual Studio DSL toolkit) and Software Factories they've already invested in.

Three out of five got my attention:

Modeling tool
They want to build a modeling tool that visually expresses the model you want. Going forward this is not only about boxes but about having a visual aid for the model that really suits its purpose, like tabular data entry, hierarchical node views etc, etc.

Modeling language
For this wave of technology they want an expressive modeling language (read Domain Specific Language) where it will be easy to express the model in text and not graphics. Partly because the visual tool will probably not support every scenario, and also to make it easy for other tool vendors to build on the rest of the stack while offering a different modeling tool than the one provided by Microsoft.

Model repository
Sondre Bjellås made a great comment about the repository in the break afterwards: "It's like SourceSafe for models". That captures the idea of the model repository beautifully. To have different versions of the model and be able to change them at runtime, or target different versions to different people, is a great facility; yes, it could be a disaster, but still. Maybe it'll help sort out the ESB story as well.

Cool? Yes! Helpful? Definitely! Innovative and impressive? Not really!

While I do look forward to seeing some of this technology stack in the wild, it won't change how I think about software development.

Leave a comment Comments (0)
 
Why SOA is broken Tuesday, February 13, 2007

It is no secret that I've been, at best, cautiously curious (to use a Microsoft term for "dissing the concept") when it comes to Service Oriented Architecture. I have many times doubted myself and chased that little bit of information and understanding that must have eluded me, since I could not buy into the whole "SOA story" that so many smart people seem to love. The story emphasizes the importance of aligning your business applications with your business goals, and thus far I'm with them, but it's in the details the devil lies.

The revelation
In Arosa this year, I found what I was missing. Or rather, someone showed me what was wrong with the marketed and communicated SOA picture. The problem with the story many SOA advocates are presenting is that they make the term too wide and make it include more concepts than it should. After discussing these issues back and forth, I suddenly realized: a big part of the story tries to slap "Big Up Front Design" and the Waterfall process model back onto the development life cycle again. Business people like to plan ahead, they like to be oracles and they like to pin down every little detail. That is why we got Waterfall and BUFD in the first place; they give business people the illusion of control and enforcement. The problem is that the process of driving a business and the process of developing software are not even in the same ballpark, much like the business model a car company has differs from the model they use to create cars. When I realized this, I also realized that the communicated story conflicts with two of the agile values:

Working Software over comprehensive documentation.
Responding to change over following a plan.

These two values, which are my two favorite ones, are what make it so hard for me to buy into the communicated SOA concept. I am a firm believer in iterative development and in letting the solution grow while the team implements each listed scenario. Experience tells me that for each scenario you implement you will most certainly find another scenario that challenges that implementation. Many times you will also, while you implement the scenarios, get new and deeper knowledge about the domain and how different parts of the domain interact with each other, and that will change the picture and the implementation. This is why it has proven so hard to build software based on a BUFD; I tend to look at the project life cycle as a living beast, and living beasts respond badly to strictly defined chains strapped onto them.

ASOD - Agile Service Oriented Development
I think that to reach its full potential SOA should be divided into two separate practices, targeting different roles in the SOA sphere:

- The SOA business model for aligning business and information, which the business people and enterprise architects can play around with.

And

- The idea of "Agile Service Oriented Development", with scenario based, contracts first development for those that need to implement and maintain the solutions defined by the enterprise.

To do ASOD we don't need to change much, we just need to move control of the contracts from those who define the business to those who implement the solutions. Instead of letting the enterprise architect pass contracts and message definitions to the development team, they could pass stories and descriptions.

ASOD would emphasize the importance of letting the contracts emerge in isolation, along with the scenarios implemented for each iteration, not worrying at all about what will be in the next iteration. The only difference from other types of systems using stories and iterations is that we would do the implementation with "vertical development", top down, letting the contract or the part of the contract we are currently working on drive the development.
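
To make "letting the contract drive the development" a bit more concrete, here is a minimal sketch of what a narrow, scenario-sized contract could look like in WCF. The service and data names are made up for the example; the point is that the contract only covers the scenario of the current iteration and grows, or gets versioned, in the next one:

using System.Runtime.Serialization;
using System.ServiceModel;

// Iteration 1 scenario: "a customer places an order".
// Nothing else is in the contract yet; the next iteration may extend or version it.
[ServiceContract(Namespace = "http://example.org/orders/v1")]
public interface IOrderService {
    [OperationContract]
    OrderReceipt PlaceOrder(PlaceOrderRequest request);
}

[DataContract]
public class PlaceOrderRequest {
    [DataMember] public string CustomerNumber;
    [DataMember] public string ProductCode;
    [DataMember] public int Quantity;
}

[DataContract]
public class OrderReceipt {
    [DataMember] public string OrderNumber;
}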

Of course, since we are talking about contracts, it is very important that we at some point in time handle them as such: sign them off and stamp them with a version number. Still, developing with iterations and scenarios will give us the ability to keep the contracts narrow and flexible, and that will give us the leverage to respond to changes during the lifecycle of the system we are building.

Conclusions.
Whether or not you agree with my description of ASOD as a good solution, we have to acknowledge that SOA is in trouble. The road SOA is taking now is the same one most practices have already taken and left behind. This is probably because some of the strong players push the business side of the concept too hard and forget about the team that needs to implement and manage the solutions. This has never been a good path to go down, in any industry.

We cannot let the SOA community get dragged back into Waterfall and BUFD; that will ultimately hurt the progress of the whole software industry. It has to stop.

Leave a comment Comments (15)
 
Re: Taking sides with Clemens Thursday, March 23, 2006
In the last couple of weeks, Clemens Vasters and Sten Sundbland (amongst others) have been trying to fire some shells towards the O/R and object-oriented community, both of them advocating message-based architecture over any OO model.

Clemens has some great points, but most of them are rooted in a lack of O/R experience (according to his last post he tried O/R out in 1999 with a hand-crafted mapper). Sten on his side tries to throw a huge rock onto Domain-Driven Design.

Now, a couple of points these gentlemen are missing:

1) As little as DDD or O/R is a silver bullet, neither is SOA. SO is a great architecture for building systems that should interact with each other, but not for effectively building applications. Now with Web 2.0 around the corner, service interfaces grow in importance, but most systems today are not best modeled based on messages.

To begin with, organizations today think in messages to a very small extent. Businesses today are still trying to figure out their internal processes and are not about to start explaining them in messages any time soon. Speaking with clients, I'm glad if I get them to understand simple business/application mapping. It might be that I'm no good at it, but almost all of my colleagues tell the same story.

Some international enterprise-sized clients do actually understand messages, but most of them use them in terms of integration, not architecture.

2) The database is storage, not a system driver. How the data is stored is really important, but the storage should not drive the application / system development. Well, for legacy systems it more often than not puts a mark in the constraints column, but then again that is true for all paradigms. Sometimes when I talk to architects who have been around since the white coats, I see a reluctance to let go of the grip on how the data gets stored and retrieved from the storage. This is the same kind of reluctance that C++ programmers still have about the garbage collector. I can understand this to some extent; if you're really good at hand-crafting SQL you might do a better job than any generic DAL, but then again you might not.

What I don't understand is how messages utterly defining the data storage structure differ from classes defining it.

I'm also a bit curious why no object-oriented paradigm ever hit VB1-6 trained developers; they've jumped from data structures representing the tables in a database to SOA without passing OO at all. Why is that? Could it be that VB developers never got the chance to implement OO systems to the full extent?

3) DDD is not only an object-oriented technique. DDD is built on many practices which capture the business in some kind of model, "the domain". However, this doesn't imply "start implementing right away" kinds of applications. It is possible, although not recommended, to get a functional model without ever writing code. The problem with doing so is that usually when we start to implement code and let it interact, we find flaws in our initial design.

Another strong driving factor is that doing design iterations together with the client will considerably help the architect and the developers get to know the domain, as well as help the domain expert get to know how the business process could be captured in their system. These iterations are aimed at avoiding a scenario where the design just doesn't get it.

Now, when it comes to messages: if parts of the system require sending messages, those messages (or message classes) could be defined with subsets of DDD in cooperation with the client, for later implementation as a service bus.

Another question where the answer has eluded me is business rules. If the only abstraction over your data is XML documents and messages, where do you put your business rules? In the client or in the database? Neither of those strategies has been successful in the past years.

One could argue that some of the business rules might fit into some kind of workflow engine. In a couple of cases this really makes sense, but we don't have a really good workflow engine as of today. So advocating that would really be looking at the future, not the present.

I could rant for hours about these things.

A short summary though: as much as the SOA community claims that people using O/R or DDD don't understand SOA, they themselves usually don't understand O/R or DDD. I also think it's very dangerous when architects disconnect themselves from the low-level details of the implementation of a system and just sit in their spacecraft looking down.

Also, a good architect has more than one tool in their toolbox; they learn and understand why and when different paradigms work best. Because you have to remember: in numerous tales, even werewolves got immune to silver bullets.

--
Links:
The sirens are singing: O/R mapping
Taking sides with Clemens

Leave a comment Comments (4)
 
Why distribute Tuesday, January 24, 2006
 

A couple of times I've heard questions raised around distributed system design. More often than not the question is about performance, or rather the performance is questioned.

 

Not many debate the usefulness of layered design of our systems; the layers are there to separate concerns, make the code more maintainable and, foremost, give us the ability of re-use, at least inside our systems. Layers are there for a purpose that most developers can clearly see, grasp and accept (although in the classroom I get questions about this more than once as well).

 

Distribution is a different story.

 

Distribution defined

First of all, distribution is not something that can happen overnight. It has to be firmly planned for in the application design. A tier needs to be clearly defined and the communication across the tier boundaries has to be prepared for the overhead of network traffic. Communication has to be chunky, not chatty, or it will let your system down badly. Most of this is old news and not really my point. The question to answer was that of performance and distribution.

 

"How could the introduction of a possible bottleneck with serialization, network latency and protocol communication be worth the hassle? And how will it make my application perform better?"

 

The key to answering this question is load. For a simple system with just a handful of users, distribution is often pointless and in many scenarios hurts performance badly, not to mention the extra complexity it introduces to maintain. Kick the load up a couple of notches to hundreds or thousands of users and the stakes change rapidly.

 

Relieving the database

In most larger systems, the one weak link in the chain is often the database server. With thousands of users, the database server needs to work serious unpaid overtime to be able to service all connections and requests from the systems. In system design you often resort to read-only caching to solve this problem; you cache frequently accessed data from low-transactional stores to ease the pain of the database server. Now if the application is a Windows Forms client that accesses the database directly, where would I cache this data? At the client? Well, that will help the single client which is requesting the same data over and over again, but that cache can't be shared across client boundaries and doesn't really do that much good for the system as a whole. In the ASP.NET scenario we can use the ASP.NET cache for a single server, but when the load forces us to load balance and build a web farm, we get the same kind of non-reusable cache as the Windows clients experience.

 

By moving the business layer and the accompanying cache to a separate tier, all applications / web servers could presumably benefit from the same cache, and not only have we boosted performance a bit, we've also made our system a bit more scalable and easier to maintain at runtime. We now only have one cache to worry about (and no, I will not dive into cache semantics or best practices at this time; the correctness of putting the cache here is really another debate, here I simply outline the options and possible benefits).

 

Are there any other performance perks? Well, it depends on how you define it. But connection pooling is something you'll win a bit on while distributing. If your business logic resides on a separate tier, so will your connection pool, with the effect that all clients / IIS servers will be serviced from the same pool instead of separate ones.

 

Are there any other reasons to distribute?

Since performance is arguably not the killer reason for distribution (the cache might be better off left to the database server), are there any others? Well, one of them is maintainability: the business logic is easier to update when you only have one server to worry about. Another is security; the tier boundaries can define security policies and effectively shut out any unauthorized access to functionality that otherwise would be accessible from the client code.

 

The SO story is also a possible win. Today your system might only service one application, but who knows what happens tomorrow. Maybe parts of your system will be servicing other applications; with distribution you could, at least theoretically, re-use some of the distributed business logic with the same cache, security settings and maintenance model as the original application.

 

Conclusion

So as you can see, even though this post is targeted at performance, there are not that many performance gains in distribution. Distribution as an animal can't be driven by performance only, and we need to carefully plan and investigate the gains we'll get and weigh them against the losses of introducing extra complexity and serializing over process / machine boundaries.

 

Although this topic is huge and we could rant for several bookshelves, these are some of the basics one should consider when trying to decide for or against distribution.

 

Some follow up links to ponder:

Application Architecture for .NET: Designing Applications and Services

http://msdn.microsoft.com/practices/apptype/webapps/default.aspx?pull=/library/en-us/dnbda/html/distapp.asp

 

Improving .NET Application Performance and Scalability

http://msdn.microsoft.com/practices/Topics/perfscale/default.aspx?pull=/library/en-us/dnpag/html/scalenet.asp

Leave a comment Comments (0)
 
Architectural reflections Sunday, November 13, 2005
Walking down to the conference venue for the first time, I had the privilege of chatting a bit with Eric Evans. It turns out that The Man has some interest in building architecture as well and was quite intrigued by Malmö's latest great building, the Turning Torso:



We passed the building in question on our way down to the conference and reflected on how the art of software architecture hasn't had the chance to evolve as much as its building counterpart. In perspective, we've built software for less than a hundred years but houses for thousands.

With this in mind though, it's quite fantastic how far we have managed to take software architecture in this short time, and I'm really psyched about how far we will take it in the years to come.

Evans claimed that today we should be happy if we could build a Viking longhouse, and I agree, although I really hope we'll be building Turning Torsos and beyond in my lifetime.
Leave a comment Comments (2)
 
Should our clients decide about technology? Wednesday, June 29, 2005
Jimmy has put forward an initiative from Daniel Akenine (Microsoft Sweden) called STOM - Software Trade Off Matrix. You can read his post here: http://www.jnsk.se/weblog/posts/stom.htm

Basically it's about listing features we could implement in our applications and associating them with costs, to visualize the increase in cost when adding them.

I'm not sure it's such a good idea, for numerous reasons. And I also think that it's attacking the problem at hand from a completely faulty angle.

To begin with, most of the features that would actually fit into such a matrix are almost impossible to value as a cost in the system. Things like "internationalization" might be quite easy if you as a company have implemented it loads of times in similar solutions before, but other things that touch architecture and design are totally dependent on the specifics of the particular system. So the idea of generalization is really dangerous for the project. A matrix without numbers, used as a template where the numbers can be added and changed during the course of the development lifecycle, would be a better idea, but still not as good as it should be.

The second major problem with such a matrix is that the project will focus on immediate development costs. This is one of the greatest challenges in any project: to make the client look ahead a couple of years into the future and not at the invoice dropping in at the end of the month. It might be that one of the features in the matrix is about extensibility; the client sees an immediate cost of 10-15% if the project focuses on that and asks us to cut it since "the changes they want to do in the future are minor anyhow". All developers can see the problem coming here.

I'm a strong believer in not focusing on costs when defining a system, but on the solution the client is really after. I think it's better to, in the requirements and planning phases, have a dialog with the client where you talk about features not as a cost but as complexity vs benefit. A good architect who understands the domain will easily separate the "nice to haves" from the "need to haves" and can narrow the discussion down to just those features, and might even discuss costs at that point.

So my recommendation is that instead of using these general templates, get to know your client and the domain and you'll be much more successful.

Although, if you decide that a STOM is a good idea, I would add some figures to the matrix:

Quality Attribute | Effect on system management costs | Cost/Time impact when developing | Cost if added later
Extensibility     | -10%                              | 10%                              | 25%


Leave a comment Comments (0)
 
Is Identity Map a cache? Friday, May 13, 2005
Lately I've encountered a couple of opinions about the Identity Map being a cache.

I've seen it both on the Swedish architecture forum (http://www.swedarch.com) and while beta reading an upcoming book.

I'm not at all sure that I agree.

Martin Fowler states in his book: "An Identity Map keeps a record of all objects that have been read from the database in a single business transaction. Whenever you want an object, you check the Identity Map first to see if you already have it."

Now let's think about the notion record and transaction.

The transaction type mentioned is, in layman's terms, a database operation or a database session. This would indicate a retrieval of several business objects or operations on them.

What the record is, is implementation specific. That would indicate (as Jimmy Nilsson pointed out) that when implemented, it could work as a level 1 cache. Although I would like to argue that the IM isn't a cache per se, but more about the notion of making sure no duplicates of business objects are found in the same business transaction. It could be that the record is just an id, and the retrieval of the entity will be asking an aggregate for it, effectively killing the idea that the IM is a cache variant.
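
For reference, here is a minimal sketch of an Identity Map in the "record of objects" sense. The Customer type is a placeholder and the loading strategy is left out; the map's only job is to guarantee a single instance per id within one business transaction, whatever the record actually is underneath:

using System.Collections.Generic;

public class Customer { /* placeholder entity for the sketch */ }

public class CustomerIdentityMap {
    // One map per business transaction / session, not a long-lived cache.
    private readonly IDictionary<int, Customer> _loaded = new Dictionary<int, Customer>();

    public Customer Find(int id) {
        Customer customer;
        _loaded.TryGetValue(id, out customer);
        return customer; // null if the object hasn't been read in this transaction
    }

    public void Register(int id, Customer customer) {
        _loaded[id] = customer; // guarantees a single instance per id
    }
}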

What do you think?

Leave a comment Comments (1)
 
Embrace changes Wednesday, April 27, 2005
In the session with Daniel Akenine, Jonas Samuelson (KnowIT) put forward the thought that there's no way to anticipate what kind of changes an organization's business model might go through. Given this, we might be implementing safeguards and flexibility in our original design which we'll never use, and thus build too expensive and too complex systems.

While I do agree with the argument, I also think we have to accept the fact that the business model will change. Whether or not we can foresee the changes, they're bound to happen or the business will die.

Therefore, in building our applications we should build and architect in a way that makes change possible, and use tools to prepare for the change. A good example of that is unit testing.

Unit testing is very much about preparing your application for change and will ensure the quality of your application during that change.


Leave a comment Comments (0)
 
Separation of concerns ( Caching code follow up) Sunday, April 24, 2005
I got a comment that my last blog post on caching encapsulation was a bit unclear, so I'll try to clarify my recommendation.

The purpose of the blog post was to plant a seed to start thinking about "separation of concerns". It was not intended to be revolutionary at any point, but to open the eyes of those who haven't yet grasped the thought pattern. I felt the need for the post because over the last couple of weeks I've met dozens of people in my classroom who haven't yet gotten the basic idea of the concept.

The recommendation was really about asking yourself the question: "does the [random layer or function] really care about this implementation detail?"

In the example I used the caching mechanism, an example many can relate to, to inject the thought that different layers don't care about the same things. To make that example complete I should really have encapsulated the use of ASP.NET caching, since the gateway really doesn't care about that particular implementation detail.

Taking the example one step further, I would make some changes:
using System;
using System.Web;
using System.Web.Caching;

public sealed class CacheStorage {

    public static object Retrieve(string key) {
        HttpContext currentCtx = HttpContext.Current;
        if (currentCtx != null) {
            Cache cache = currentCtx.Cache;
            return cache[key];
        }
        return null;
    }

    public static void Insert(string key, object data) {
        HttpContext currentCtx = HttpContext.Current;
        if (currentCtx != null) {
            // cache the item under the caller's key with a 30 second sliding expiration
            currentCtx.Cache.Insert(key, data, null, Cache.NoAbsoluteExpiration, new TimeSpan(0, 0, 0, 30));
        }
    }
}

using System.Data;
using System.Data.SqlClient;

public sealed class PubsGateway {
    private PubsGateway() {}
    public const string AuthorsTableCacheKey = "authorsTableCacheKey";

    public DataTable GetAuthors() {
        DataTable authorsTable = CacheStorage.Retrieve(AuthorsTableCacheKey) as DataTable;

        if (null == authorsTable) {
            authorsTable = new DataTable();
            // conn: an open SqlConnection to the pubs database (omitted in this snippet)
            SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Authors", conn);
            adapter.Fill(authorsTable);
            CacheStorage.Insert(AuthorsTableCacheKey, authorsTable);
        }

        return authorsTable;
    }
}

Now we have a bit of "separation of concerns" in both the presentation layer and the gateway.

Taking this thought further, another example could be state. State can be stored in multiple places, and most code in the presentation layer or the business objects doesn't really care where. Applying the same thought as above, you can encapsulate that too, e.g.:

public sealed class StateStorage {

    public static object Retrieve(string key) {}
    public static void Insert(string key, object data) {}
}
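
As one possible (hypothetical) implementation of that skeleton, backed by ASP.NET session state; the point is that the calling code never needs to know where the state actually lives:

using System.Web;

public sealed class StateStorage {

    public static object Retrieve(string key) {
        HttpContext currentCtx = HttpContext.Current;
        if (currentCtx != null && currentCtx.Session != null) {
            return currentCtx.Session[key];
        }
        return null;
    }

    public static void Insert(string key, object data) {
        HttpContext currentCtx = HttpContext.Current;
        if (currentCtx != null && currentCtx.Session != null) {
            currentCtx.Session[key] = data;
        }
    }
}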

I hope this clarifies my recommendations and at least gives an introduction to thinking in terms of "separation of concerns".

Leave a comment Comments (0)
 
Where does my Cache Code go? Friday, April 15, 2005
Some of you might sigh and go: "Not again!" If that's the case, surf on, there's a cool cartoon for you guys over here.

For the rest of you though;

Have you seriously considered where your caching code should go?

There are different caching strategies out there: you've got output cache, ASP.NET cache, SQL Server cache, custom cache and so on. I will not bore you with a lengthy discussion about which strategy to use, but I will dive into one where I day after day see really crappy implementations:

The ASP.NET cache mechanism.

This is a great feature in ASP.NET: you have a facility where you can cache your datasets or domain models nice and easy. And many do! They successfully increase throughput in their applications by leveraging this easy API.

I would like to ask all of you who use it a simple question though:

   "Do the presentation layer care about the caching?"

Think about that for a moment. The presentation layer's main purpose is to present; does it care where the data it presents comes from? I would argue no.

Therefore I would like to add "cache code" to the areas where encapsulation should apply. Encapsulate your cache code, abstract it away from the presentation, and not only will you get nice code reuse, you will also get a nice clean presentation layer which concentrates on presenting.

A quick example of what I propose:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Web;
using System.Web.Caching;

public sealed class PubsGateway {
    private PubsGateway() {}

    public const string AuthorsTableCacheKey = "authorsTableCacheKey";

    public DataTable GetAuthors() {
        Cache cache = HttpContext.Current.Cache;
        DataTable authorsTable = cache[AuthorsTableCacheKey] as DataTable;

        if (null == authorsTable) {
            authorsTable = new DataTable();
            // conn: an open SqlConnection to the pubs database (omitted in this snippet)
            SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Authors", conn);
            adapter.Fill(authorsTable);
            cache.Insert(AuthorsTableCacheKey, authorsTable, null,
                Cache.NoAbsoluteExpiration, new TimeSpan(0, 0, 0, 30));
        }

        return authorsTable;
    }
}
Leave a comment Comments (2)
 
Is SOA the universal tool? Monday, February 28, 2005
Browsing for material I'll be using at the MSDN Live seminars, I found this little cartoon, summarizing my view on SOA perfectly:

http://www.theserverside.net/cartoons/TalesFromTheServerSide.tss

Remember folks, SOA is just a tool in your tool box.

Leave a comment Comments (0)
 
Are o/r mappers ready for Microsoft prime time? Saturday, January 22, 2005

There's a lot of noise going on around the newest buzzword for Microsoft developers: O/R mapping. Is this just hype, or should developers on the Microsoft platform pay attention to the noise?

Browsing the communities, reading articles and talking to fellow developers, you'll pretty soon catch that o/r mapping is interesting at a minimum. But is it ready for prime time?

For Java developers, o/r mapping has been around for a long period of time. There's even a specification for it called JDO (the Java community is arguing whether JDO should make it to v2 or more focus should be put on EJB 3, but that's another story), which would indicate that Java developers already take o/r mapping seriously.

So what is stopping it from entering the Microsoft platform?

From discussions with friends, colleagues and students, there are some great concerns.

The availability of products
Since o/r mapping is a new concept for Microsofties, there are just a couple of stable products on the market. Some of the better known ones are LLBLGen, NPersist, NHibernate and DeKlarit.

While these are all promising products, some of them have been in beta for a long time and others still have to prove themselves in high-profile reference cases.

It's also a problem that no major player has backed any of them yet. This makes them uncertain, since the small companies that develop some of them, and the community processes that develop the others, don't really inspire confidence in the products.

Legacy data stores
Legacy databases are usually built with no knowledge of o/r mapping, and therefore the data schema might be hard to map. For developers this is a steep hill to climb and makes o/r mapping just another complexity level in your project, not the help it is supposed to be.

Developer readiness
O/r mapping is a new concept with a new set of tools, new knowledge to acquire, and it might be too abstract. In an environment of time-consuming projects, developers might not have the spare time to learn a new paradigm.


So what about it? Should I use o/r mappers?

I would definitely encourage you to at least try some out. Maybe o/r mapping isn't suitable for your current project, but it will more likely be for your next or the one after that. O/r mapping isn't a silver bullet, but in certain scenarios it actually makes a lot of sense.

If legacy databases are a problem for your mapping, consider using views to refactor the database schema to suit your mapping better. Even if this is an enormous task, it might just be worth it for the value that o/r mapping brings.

Also, don't be discouraged by arguments telling you that "for massive updates you have to bring up a lot of objects to the business layer". This is something that has been thought of, and there's a notion of "set-based updates" which addresses that.

If all else fails, most o/r mappers still let you access the db command object directly to override the auto-generated commands.

My favorite poison right now is NHibernate, which you can download at www.nhibernate.org.

Leave a comment Comments (9)
 
Principle of layers reduxed. Saturday, January 15, 2005

So, time for the first techie Shout!

The last session at the Lillehammer workshop was a brainstorm about layering principles. I will not dig into the depths of that session since there have already been blog posts about it here (Fowler) and here (Helander).

What I do want to share is some thoughts about one of the principles suggested by Martin:

  • There are at least three main layer types: presentation, domain, and data source. 3/9

The numbers at the end are votes: 3 for and 9 against.

This was quite surprising at first, since the definition is rather weak in only saying "at least", and the three layer types suggested are widely accepted among software architects.

Later in the hotel bar on Friday night, a bit of an explanation of the numbers surfaced. It seemed that voters in general had interpreted the principle as "There are at least three main layer types present in a system: presentation, domain and data source", whereas I don't think that was the original intent of the principle by the suggestor; I believe the principle should be interpreted as "There are at least three basic layer types for a system identified". This was the start of an interesting discussion.

How are systems defined layer-wise? I think we can all agree that not all systems include the three layers mentioned in the principle. Patrick Linskey made a strong argument that a fair amount of systems might only expose a domain layer, and I agree with the argument, but I don't really agree with the imposed principle for layered architecture.

In any good system architecture there should always be some kind of "interaction layer" exposed to the outside user. Whether this layer is a presentation layer, a service layer, a remote facade or a public API doesn't really matter. As long as it is separated from our domain and is the point where actors other than the system itself interact with the domain logic, I'll be satisfied.

So what is the basis for this argument? Well, good old-fashioned contracts with the actor. I would like to have the option to re-shuffle my domain and domain logic without affecting the contract I have with my actors.
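
As a small, hypothetical sketch of what that separation buys: the outside actor only ever sees the interaction layer's contract, so the domain types behind it can be reshuffled freely. All names below are made up for the illustration:

// Hypothetical domain types; their shape is free to change behind the facade.
public class Customer {
    public string FirstName;
    public string LastName;
}

public static class CustomerRepository {
    public static Customer GetById(int id) {
        // data access omitted; returns a dummy customer for the sketch
        Customer c = new Customer();
        c.FirstName = "Ada";
        c.LastName = "Lovelace";
        return c;
    }
}

// The contract exposed to outside actors (presentation, service layer, public API...).
public interface ICustomerFacade {
    string GetCustomerDisplayName(int customerId);
}

// The interaction layer: as long as this contract holds, the domain and
// data access behind it can be reshuffled without affecting the actors.
public class CustomerFacade : ICustomerFacade {
    public string GetCustomerDisplayName(int customerId) {
        Customer customer = CustomerRepository.GetById(customerId);
        return customer.LastName + ", " + customer.FirstName;
    }
}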

The conclusion!? I would like to change the wording of Martin's original principle a bit and add two more:

  • There are at least three basic layer types for a system identified: presentation, domain and data
  • A system should expose an "interaction layer" to outside actors; where the meaning of actor is external systems or users.
  • The interaction layer could be any layer type that gives an actor interaction ability, e.g. a presentation or service layer.
Leave a comment Comments (2)