Data (19)
LINQ is Just Another Query Language! Monday, October 13, 2008

Just a quick reminder to all the folks out there. Even though LINQ is really slick, it's still a query language and belongs in the same places as other types of query languages do. That is, if you query a database, the query should be encapsulated in some kind of DAL component, like a repository; it should not live in the UI and almost never in the service layer (small solutions excluded).

Creating LINQ queries in other places suffers from the exact same "shotgun surgery" (sprawl solution) code smell as putting T-SQL queries there.

So in short: refrain from moving queries out of the DAL, even LINQ queries.
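To illustrate the point, here is a minimal sketch of what that encapsulation could look like. The NorthwindDataContext, the Customer entity and the repository shape are just assumptions for the example, not code from a real project:

using System.Collections.Generic;
using System.Linq;

// Hypothetical repository: the LINQ query stays inside the DAL,
// so the UI and service layers never see IQueryable or the DataContext.
public class CustomerRepository
{
    private readonly NorthwindDataContext _context;

    public CustomerRepository(NorthwindDataContext context)
    {
        _context = context;
    }

    public IList<Customer> FindByCity(string city)
    {
        // The query lives here, not in the UI.
        return (from c in _context.Customers
                where c.City == city
                orderby c.CompanyName
                select c).ToList();
    }
}

The UI then just calls repository.FindByCity("Malmö") and never sees the query itself.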

 
SQL Summit 2008: The Compression Session Thursday, October 09, 2008

At TechEd (https://lowendahl.net/showShout.aspx?id=169) last year this feature got a massive round of applause: the ability to compress backups as well as data and log files. Kalen gave a great talk on this today where she showcased the benefits you get from the new compression features.

Since large databases are expensive in terms of storage, performance and maintainability, Microsoft decided to help DBAs out and ensure that the database can be as small as possible. To get the best advantage both for data in memory and on disk, they decided to look at the row storage in the page. The trade-off for the smaller size is CPU cycles, since compression / decompression adds load to the processors.

There are three compression options:

Row compression

- Compresses data in the columns and makes every row use less space. The compression algorithm used can't take advantage of repeating values across the whole page. This is an extension of the vardecimal storage format introduced in SQL Server 2005 SP2, which tried to store decimals with as few bytes as possible. Row compression gives you a reasonable compression ratio with a small CPU overhead.

Page compression

- Compresses the whole page and can use a compression algorithm that takes advantage of the fact that values can repeat across the page. Page compression also includes row compression. It gives you the best compression ratio at a higher CPU cost.

Backup compression

- Compresses the backup file which leads to less disk I/O and smaller backup files.

 

One really interesting option is the ability to have different compression for different data partitions, which makes it possible to compress data based on rules. Maybe you have a year's worth of data where only the last couple of months are frequently accessed. To get the best performance / compression balance you can partition that data and apply row compression to the recent partitions while the older data gets full page compression. I can see how this will be very important in data warehouse scenarios.
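As an illustration of what that could look like from code, here is a hedged sketch; the table name, partition numbers and connection string are made up for the example, while DATA_COMPRESSION and WITH COMPRESSION are the SQL Server 2008 options themselves:

using System.Data.SqlClient;

// Illustrative only: partition 4 holds the current, frequently accessed data
// and gets row compression; partition 1 holds old data and gets page compression.
using (SqlConnection connection = new SqlConnection(
    "Data Source=.;Initial Catalog=Sales;Integrated Security=true"))
{
    connection.Open();

    string ddl = @"
        ALTER TABLE dbo.Orders REBUILD PARTITION = 4 WITH (DATA_COMPRESSION = ROW);
        ALTER TABLE dbo.Orders REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = PAGE);
        BACKUP DATABASE Sales TO DISK = 'C:\Backup\Sales.bak' WITH COMPRESSION;";

    using (SqlCommand command = new SqlCommand(ddl, connection))
    {
        command.ExecuteNonQuery();
    }
}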

Speaking of data warehouses: Kalen stressed that the design focus for the compression feature was large data warehouse installations, where we have to handle vast amounts of data and the CPU trade-off is therefore acceptable. For scenarios where CPU is more important, just stay away from compression.

Kalen dug deep into how the compression works. Being an application developer with limited knowledge of how pages are laid out, I did get that it was effective, but the exact details eluded me. That's fine, though; I'm not going to switch careers and start hunting bytes in the data pages of SQL Server.

Overall, this session gave great insight into the problems compression is set out to solve and how SQL Server actually does the compression.

Thank you Kalen :)

--

Sql Summit 2008 Event site:

http://www.expertzone.se/sql2k8/

More on the compression features:

http://blogs.msdn.com/sqlserverstorageengine/archive/2007/11/12/types-of-data-compression-in-sql-server-2008.aspx

Kalen's blog:

http://sqlblog.com/blogs/kalen_delaney/

 
Model first in Entity Framework Thursday, September 11, 2008

One of my pet peeves with Entity Framework is that it forces me to generate my model from a database. That constrains the kind of modeling I can do for my application, and the model ultimately becomes a mirror of the database structure, which was not necessarily structured in the way best suited for my application or the domain at hand.

I know there was a long discussion about this at the MVP Summit -07; unfortunately it took place in an ASP.NET session while I was in a C# 3 session. Nevertheless it seems like the team actually understands the challenge and is pushing forward to get the designer to let me model without the constraints of the database.

This post: http://blogs.msdn.com/efdesign/archive/2008/09/10/model-first.aspx outlines in detail how the team is looking at solving the issue and what the developer experience will look like.

With this and the advertised POCO support, maybe Entity Framework will let me build applications the way I want to and not force me into a design pattern I don't like.

 
Thoughts about Entity Framework and Upcoming talks Wednesday, September 10, 2008

Entity Framework has been out a while now. I did some talks on the early and late beta bits and have written some code around the RTM bits. It's an interesting piece of technology and I think it will create a bridge between the ALT.NET kind of developers, for whom NHibernate and ORM are everyday technology, and the dataset-oriented guys. I really don't think it'll take that much market from NHibernate though. Not this version of EF.

The Entity Data Model is a very interesting modeling tool and has a lot of exciting capabilities, but the object runtime in Entity Framework lacks too many features to really compete with more mature ORMs.

That said, I do encourage developers to have a look at EF. Especially those of you who haven't worked with ORM yet will find  Entity Framework a very exciting tool to put into your toolbox.

My next challenge with Entity Framework is to get a bunch of database guys to like it. I'll be delivering a talk on EF on the upcoming ExpertZone SQL Summit that will tour around Sweden in October. That's a really cool event and the SQL Server Mega Star Kalen Delaney will make an appearance, along with myself, Tibor Karazi, Roger Cevung and a bunch of other really talented people.

If you have some time to spare and are interested in the latest database technology as well as case studies with SQL Server 2008 in the wild, stop by: http://www.expertzone.se/sql2k8/

At least to see me get bombarded with rotten vegetables when I try to explain the power of letting EF generate the queries in the application layer ;)

 
"Are you stupid or just ignorant?" Wednesday, August 27, 2008

Before my vacation I wrote a post on why I prefer a default architecture without stored procedures. As expected there were numerous comments (it's a hot topic). I tried to do my best to explain where I came from, why I hold the opinions I do, and why I think my life as a developer is much better without having to worry about stored procedures in every corner of my applications and services. But even with the effort to be "discussion friendly" I got comments attacking my competence, the types of systems I build and my sanity (I deleted that comment).

How come an opinion different from yours has to be an uninformed one? Couldn't it just be that I value different aspects than you do? Couldn't it be that my experience and my scenarios differ from those that you base your opinion on? Couldn't it even be that you are actually the uninformed one? How could one tell, without knowing a person or her full history? There is no way of knowing if someone is uninformed or very informed. That's why I like to debate and discuss opinions as opinions and facts as facts.

So that's what I'm going to do now. I'll meet some of the comments on the blog here with my opinions and facts, not with "my system is bigger than yours" and "I know more than you" kinds of arguments. This is going to get long, so skip to the parts that interest you.

 

why is stored procedures harder to maintain / read / understand? sounds like you guys lack som experience and knowledge in this area.

For querying, DML and such, SQL is easy to read and understand. That is what it is built for. Hence "Structured Query Language". But for business process intensive types of applications it's not as easy to express intention and logic in SQL as in an OO language (or even better a DSL like WF XOML).

sp's are as easy to source control as any other textbased files you have in your sourcecode projects and have been so for quite some time now (and yes even before vs.net was introduced)

I don't agree that using text files and file-system-integrated source control extensions qualifies as a "good source control story"; basically I think it sucks, and it doesn't do dependency check-outs. To qualify, it needs to be integrated with my IDE, be at my fingertips, and let me easily check in and out versions of a database, not just a certain script that I then have to run manually on my development database server. That is what Team System gives you, and that makes the story better.

I'm not sure what you mean by this . So, I'm going to make a guess and say that this has something to do with quickly determining which version of development code is in the database.

Yes, that is exactly what I mean. One of the projects I maintain is a multi-tenant/single-tenant SaaS application, and without discipline and versioning tables it has been hard to maintain. For the logic we do have in the DB we need extensive documentation for every tenant to know which version of what goes where, which also results in duplication of code and a fear of updating.

I see the point that many DBAs have - if they want to change the underlying schema for performance/storage considerations, they should be able to do so as long as they provide developers with the same set of outputs as before, given the same inputs of course.

There are other ways to achieve the same thing for a DBA without constraining the developer. Use views, for instance. Views are the tool for structural integrity. Not sprocs.

There are ways to dynamically generate WHERE and ORDER BY clauses using T-SQL and parameterized procedures/queries.

Which usually involves dynamic SQL inside the sproc.

Not to sound insulting, but the demand for multiple potential join combinations sounds like, to me, a poorly planned data access scenario.

Not at all, it's all about not knowing what kind of joins I want until I see them. In an agile development environment you incrementally build up your code; for every story you implement, you add more code. For some stories you might want the join, for others you don't. Having to add multiple stored procedures for that is painful when a tool can generate the best join for my scenario. I'm not talking about joins changing at runtime, I'm talking about being flexible during development.
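A rough sketch of what I mean; the context and the Order/Customer entities are hypothetical, the point is just that the query can grow with the stories instead of spawning a new stored procedure per combination:

// Story 7: the base query, no join needed yet.
IQueryable<Order> openOrders = context.Orders.Where(o => o.Status == OrderStatus.Open);

// Story 12: now we also need the customer name, so we extend the query
// and let the provider generate the join for us.
var openOrdersWithCustomer =
    from o in openOrders
    join c in context.Customers on o.CustomerId equals c.Id
    select new { o.OrderNumber, c.CompanyName };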

Your arguments shows some great lack of understanding of what a stored procedure is
and what it gives to you. You probably never ever worked on anything big or your just
complete ignorant.

Of course; as I said before, any opinion different from yours is probably based on those things you mention.

1. A stored procedure can be representated by a text-file.

Agreed. Read my above comments on that.

How do you keep control of your C#-code?
Don't you see it's the same problem. How do you build that code?
How would you know what versions of your classes you put in to your .exe?

Eh, no. That code is very simple to branch, check out and build a new (or old) version of. DBs aren't, for the reason you yourself outline:

You can't just drop a table and recreate it because you whant a new column. As you understand
we need to make sure all the data stays aswell

Exactly, and hence the bad versioning story for stored procedures. I can just check out any C# code from any label or branch and create a new version and deploy. DB's are much much harder.

How can using 2 different technologies be less flexible?
Just use the one that does the job best and makes sure you layer your application.

Using two technologies might not be, but maintaining two sets of skills is. If there is a tool that lets me concentrate and focus on one of those skills, why would I even want to worry about the second? In the early days I did the usual C / assembly thing. I loved it when I could drop assembly and just focus on C++. Wouldn't you?

SP:s are desinged for logic on data

I disagree. SP:s are designed for querying logic on data, not for business-process kinds of logic. There are multiple tools that are much better at that. Even the relational databases acknowledge that; that's why we got cubes and languages to work on the cube for BI scenarios.

With this kind of reasoning I guess you even check all constraints in C#.

Of course I do. I want the tool best suited for the task at hand.

SP:s automagically handle the isolation level that you need. And this with no impact on
database scalibilty. Try doing that with dynamic SQL.

Gladly and with no problem at all. Isolation levels for the transactions you need are as easy to set from code as in sprocs. I'm not sure what you mean by "automagically"; how would sprocs know what isolation level I want on my data?
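For reference, a minimal sketch of setting the isolation level from code with System.Transactions; the repository and the order are just stand-ins:

using System;
using System.Transactions;

TransactionOptions options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.Serializable,
    Timeout = TimeSpan.FromSeconds(30)
};

using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Any ADO.NET or ORM work done here enlists in the transaction
    // with the isolation level chosen above.
    repository.Save(order);

    scope.Complete();
}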

and scaliblity will go down

I disagree; for the scenarios I'm working in there is no extra impact on scalability when controlling transactions or isolation levels from code rather than in a sproc. I would be happy to hear in what scenarios the scalability loss occurs and why.

How can they be hard to test?

I'm not sure what standards you set for your testing, but a single execution in a query window, debugging, or watching a profiler trace isn't "testing" to me when it comes to business logic. I would encourage you to read something from Beck or anything on unit testing or even TDD.
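To be concrete about the kind of repeatable test I mean, here is an NUnit-style sketch; the domain types and the discount rule are made up for the example:

using NUnit.Framework;

[TestFixture]
public class OrderDiscountTests
{
    [Test]
    public void Gold_customers_get_ten_percent_discount()
    {
        // No database, no query window: just the business rule under test.
        Customer customer = new Customer { Level = CustomerLevel.Gold };
        Order order = new Order(customer) { Total = 100m };

        Assert.AreEqual(90m, order.TotalAfterDiscount);
    }
}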

Ever hear of CRUD SP:s. These are the trivial ones you talk about. And yes there has been tools
to generate them for at least 10 years.

Agreed. Nothing new under the sun.

1. All queries are pre-parse even before the first time they are executes.
In a busy DB-application the thoughets job is not to actually execute the SQL
But rather to parse all the SQL. (Check Syntax, Grants, Synonyms, Optimize)
So imagine what would happen with Dynamic SQL. It just kills scalibility.

Not true; at least in SQL Server 2005 and later, dynamic SQL is parsed and the query plan cached. Not before it executes the first time, but that is enough for any performance issues you might have. The pre-compile argument is a huge issue, by the way, and not as straightforward as you claim. In some scenarios the pre-compilation is actually a drawback; that's why we got the WITH RECOMPILE option.
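The key is parameterizing the ad-hoc SQL so the server can reuse the cached plan. From ADO.NET that looks something like this; the table, columns and connection string are just examples:

using System;
using System.Data.SqlClient;

// Parameterized ad-hoc SQL: SQL Server caches one plan for this statement
// and reuses it for every @city value, much like a stored procedure call.
using (SqlConnection connection = new SqlConnection(
    "Data Source=.;Initial Catalog=Northwind;Integrated Security=true"))
using (SqlCommand command = new SqlCommand(
    "SELECT CustomerID, CompanyName FROM dbo.Customers WHERE City = @city", connection))
{
    command.Parameters.AddWithValue("@city", "Malmö");

    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(1));
        }
    }
}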

All objects in the database have something called strong dependecy check.
This doesn't exist in your .NET world. And this is a huge help if you need to change something.
You know like maintain you application. An example.
If you drop a column on a table then all SP:s that refer to that column will automagically become invalid
This happens instantily and not at run-time as with normal code

Instantly? In my environment (SQL2K5, SQL2K8) I get nothing until the sproc is executed. Also, what if my business logic or presentation depends on that column? You will break my application.

A question to you: how do you define scalability? For me it's the ability of the system to handle more users when adding more hardware. You use the word as if it has something to do with query performance?

All in all, I talked about a default architecture: using the right tool for the right job. Some overuse C#, others overuse sprocs. I tend to avoid sprocs for any data access since tools can generate the SQL for me. That kind of architecture works very well for a lot of scenarios, maybe not for all, but for a sufficient amount to call it the default.

Link to the original post and comments: https://lowendahl.net/showShout.aspx?id=213

 
Finding the DataContext for a Single Linq To Sql Object - A Second Attempt Friday, February 08, 2008

You can find the first attempt here: https://lowendahl.net/showShout.aspx?id=184

The basic idea is to find, later in the application life-cycle, the DataContext that was used when fetching a single object, starting from that object.

I've now rewritten the whole enchilada as an extension method that extends INotifyPropertyChanging.

The syntax for getting the context is now:

DataContext context = product.GetDataContextFromMe();

or

NorthwindDataContext context = product.GetDataContextFromMe();

For a typed version.

You can find and download the extension class here: https://lowendahl.net/shoutCode.aspx
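The actual code is in the download above; for the gist of it, the extension method looks roughly like this. This is a sketch based on the reflection approach from the first attempt below, not the exact downloadable code, and the typed variant would simply cast the result:

using System;
using System.ComponentModel;
using System.Data.Linq;
using System.Reflection;

public static class DataContextExtensions
{
    // Works on any LINQ to SQL entity, since they all implement INotifyPropertyChanging.
    public static DataContext GetDataContextFromMe(this INotifyPropertyChanging entity)
    {
        // Field-like events get a compiler-generated backing field with the same name,
        // so we can reach the PropertyChanging delegate through reflection.
        FieldInfo eventField = entity.GetType().GetField(
            "PropertyChanging", BindingFlags.NonPublic | BindingFlags.Instance);
        if (eventField == null)
            return null;

        PropertyChangingEventHandler handler =
            eventField.GetValue(entity) as PropertyChangingEventHandler;
        if (handler == null)
            return null;

        foreach (Delegate subscriber in handler.GetInvocationList())
        {
            object target = subscriber.Target;
            if (target == null || target.GetType().BaseType == null
                || target.GetType().BaseType.Name != "ChangeTracker")
                continue;

            // Same reflection dance as in the first attempt: tracker -> services -> Context.
            object services = target.GetType()
                .GetField("services", BindingFlags.NonPublic | BindingFlags.Instance)
                .GetValue(target);

            return (DataContext)services.GetType()
                .GetProperty("Context", BindingFlags.Instance | BindingFlags.Public)
                .GetValue(services, null);
        }

        return null;
    }
}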

 
Finding the DataContext for a single Linq To SQL Object - A first attempt Friday, February 08, 2008

Update: You can find the second attempt here: https://lowendahl.net/showShout.aspx?id=185

In some cases you would like to find the DataContext of an object you were handed. It turns out to be a bit trickier than what first meets the eye. Since the objects themselves don't expose a DataContext property, I decided to write a property that did.

Based on the fact that the DataContext gets change information from objects through the INotifyPropertyChanging interface and event, I assumed it would be a piece of cake to back-track to the context through the event registration with GetInvocationList. Well, that part went well, but it turns out that the object listening to the event is a subclass of System.Data.Linq.ChangeTracker, which is marked private/protected along with its children. Nifty little feature in LINQ to SQL, isn't it?

After digging around with Ildasm I figured out one possible solution using reflection to get hold of the DataContext. It turned out something like this:

 

public partial class Product
{
    // Walks the PropertyChanging event's invocation list to find the
    // change tracker, then uses reflection to get to its DataContext.
    public DataContext DataContext
    {
        get
        {
            if (this.PropertyChanging != null)
            {
                foreach (Delegate item in this.PropertyChanging.GetInvocationList())
                {
                    if (IsChangeTracker(item.Target))
                        return GetDataContext(item.Target);
                }
            }

            return null;
        }
    }

    private DataContext GetDataContext(object value)
    {
        // The change tracker holds a private "services" field whose public
        // Context property is the DataContext we are after.
        Type typeOfValue = value.GetType();
        FieldInfo servicesField = typeOfValue.GetField("services", BindingFlags.NonPublic | BindingFlags.Instance);
        object services = servicesField.GetValue(value);
        Type servicesType = services.GetType();
        PropertyInfo contextProperty = servicesType.GetProperty("Context", BindingFlags.Instance | BindingFlags.Public);

        return (DataContext)contextProperty.GetValue(services, null);
    }

    public static bool IsChangeTracker(object value)
    {
        // The concrete tracker type is non-public, so match on its base type name.
        Type baseType = value.GetType().BaseType;
        return baseType != null && baseType.Name == "ChangeTracker";
    }
}

 

There is probably plenty that could be improved in the reflection code, but this shows the basic principle of the matter.

 
An overview of SQL Server 2008 Tuesday, November 06, 2007

Tuesday began with an overview of SQL Server 2008. I haven't really looked into the next release since there is so much other technology that has had my attention, so this was a great opportunity to brush up.

Since I work at the ATE booth I couldn't stay to the end, but here are my take-aways from that session:

Intellisense in the query window!!!!

Title says it all.

Entity Framework

Contrary to popular belief, Entity Framework is not a SQL Server 2008 technology. I've known this for some time, but since they showed EF in this overview talk I decided to ask. The message was basically that SQL Server benefits from using EF as the conceptual model over the storage, while EF won't gain any particular benefit from using SQL Server 2008 instead of any other RDBMS.

SQL Server Compact Edition

This is a new and improved version of the in-process database that Microsoft has shipped earlier versions of, and it will replace SQL Express as the default database engine in Visual Studio 2008.

It's a really capable in-proc engine with support for synchronization with a larger engine, like a fully fledged SQL Server, which enables a really interesting set of scenarios for occasionally connected clients and for stand-alone applications.

It has no stored procedure support, nor does it support table-valued functions, but it has vastly improved change tracking and conflict detection when synchronizing with a master database compared to previous versions.

I'm really excited over this technology and will play around with it when I get home. More to come there.

Performance

SQL Server 2008 will come with data compression, which massively reduces the footprint the database and backup files take on disk. It's also heavily optimized, so compression / decompression is lightning fast, or so they say ;)

Another performance issue they've addressed is resource management. When several applications, or users of different parts of an application, do time-consuming work, there was really no way to limit how many resources that work could take up. So if that heavy report ran first, the rest of the users might have to share a very limited set of resources. In SQL Server 2008 there will be a resource governor that lets you configure workload sets (processor power, memory consumption etc.) and assign them to work based on user, application, query and so on. A really great feature!

Improvement in Security

In SQL Server 2005 they added the ability to encrypt data in the database. There was a challenge with that specific technology since the encryption / decryption wasn't transparent to the data access APIs. This meant a bunch of custom code in the applications to support encrypted data in the tables. There were also challenges in key management, where keys couldn't be as external as some scenarios required.

In SQL Server 2008 both these issues have been addressed. The encryption / decryption is transparent to the data access APIs, and keys are handled more flexibly.

They also added Data Auditing to audit what happens to the data, kind of a security log for tables. This will make it possible to track who did what and when.

Management

The new management model has capabilities to create policies that a database needs to conform to. These policies define a set of rules that a DBA decides databases in the enterprise need to follow; they can then be applied to a server to enforce them or to check whether they are followed. Together with the new multi-server management feature, this means that a DBA can set up policies and run them across the enterprise to configure the server environment. The APIs are also available from PowerShell and hook nicely into the Design for Operations and Dynamic Systems Initiative from Microsoft, which means that you will be able to administer and configure databases centrally using System Center.

 
Entity Framework Beta 2 released! Friday, August 31, 2007

Read more at the ADO.NET Team Blog (http://blogs.msdn.com/adonet/archive/2007/08/27/entity-framework-beta-2-the-1st-entity-framework-tools-ctp-released.aspx):

 
ORM and legacy data Sunday, June 03, 2007

A week or so ago Gavin from the Hibernate team wrote a very humorous post (you can read it at http://blog.hibernate.org/cgi-bin/blosxom.cgi/2007/05/23#in-defence) where he made a clear case for ORM against OODBMS. One of his points was that ORM is the only one of the two that can handle legacy data in a satisfying way.

I have used ORM in several projects over the past years, and at least 2/3 of those have been against some kind of legacy database. The thing about legacy databases (or really screwed-up data model designs) is that they often don't conform to even the second normal form. Mostly they are flat representations of the business problems at hand.

I thought I should share one of my absolute nightmare stories and elaborate a bit on the design choices that were made to support the application's scenario on top of that nightmare.

This scenario is based on a database where the data model was built to support the lowest common denominator of several RDBMSs, SQL Server, Oracle, DB2 and Sybase being just some of them. This means a data model with almost no normalization for some parts and extreme normalization for others. On top of that, I'm certain that parts of that model were written by a trainee or a monkey of some sort.

There is a table with date bookings, and these date bookings can be one of two things: either a stand-alone booking or a booking connected to a resource. If the booking is stand-alone, the ITEM_NO column will hold the value "Appointment" and the description column will hold a value. If it is connected to a resource, the ITEM_NO column will contain the ID of that resource and the description will instead be null.

While ORM is really good at handling a lot of different mapping scenarios, some are just not mappable at all when it comes to object models, and if it is really hard to find an object model to hold the data, the ORM will fall flat.

In this scenario it was really hard to create a model that actually made sense. Since the domain had no interest in tracking back to the resource, but just wanted the resource name as a description when the description was empty, the idea was to have only an Appointment class holding the information.

The first idea was to use inheritance and create different mappings for the two scenarios. That did not really work out. Inheritance in ORM uses two approaches: one is "table per class", which wasn't suitable here since we only had one class; the other uses a discriminator column to choose which concrete class to use, which wasn't suitable either. There were two possible ways of separating the rows, either by the ITEM_NO column or by the Description column, and neither gave a clear, distinct way to discriminate between concrete classes.

The second idea was to create a many-to-one relationship with the resource class, where we just got the value from the resource and used encapsulation in the Description property to make sure the right information was presented.

Something like:

protected virtual Resource Resource { ... }

public virtual string Description
{
    get
    {
        if (Resource != null)
            return Resource.Description;
        else
            return _description;
    }
}

This is a perfectly valid solution. The problem I had with this was that the data model actually got to put constraints on my domain model and made it ugly, though functional. I did not want that.

So instead I let the Data Model get constraints from the Domain Model. The approach for this scenario that I took was to create a view in the database that supported a clean model. The view looked something like this:

create view vwAppointments
as
select no, dateBooked,
       case when resources.description is null
            then appointments.description
            else resources.description
       end as description
from appointments
left outer join resources
    on resources.item_no = appointments.item_no

Joy! Now I could model my scenario as it was supposed to be in the application and not care about the constraints the legacy database put upon me.

So yes, most ORMs are good at mapping legacy data, but helping them a bit by cleaning up the model with some nice views does help.

 
Putting a nail in the OODBms coffin Thursday, May 24, 2007

Frans just made me aware of a hilarious blog post on why OODBMSs are not the silver bullet. Not even near.

This is a must read: http://blog.hibernate.org/cgi-bin/blosxom.cgi/2007/05/23#in-defence

 
The death of the persistence ID Saturday, December 23, 2006
In the last couple of weeks I've been in numerous discussions about the object vs. the data domain. Yes, really, a discussion about them vs. each other.

I'll write a longer post derived from that discussion in the coming week.

One issue that came up, which even developers who conform to DDD have a hard time agreeing on, is the presence of the persistence ID. I would argue that the persistence ID is only of interest to the persistence engine, not to the domain model.

Why do I claim this?

The domain model should be a basis for capturing the business domain. I'm not really sure what good a regular persistence ID would do the business. Have you ever heard a domain expert talk in terms of long GUIDs when describing their artifacts? No, they usually use the business ID to label and identify objects in the business. If we are to be true to that business domain in the domain model, wouldn't the more correct approach be to hide the persistence ID and let the business ID be the identifier?

Now, I'm not arguing that we shouldn't have a persistence ID, just that it shouldn't have any importance in the domain model except for persistence purposes. I'm aware that many developers use the persistence ID in lists and such to identify objects, but I really think this is a case of history still clinging on in our applications.

Well, even if I can't convince you, 2007 will be the year where my domains look more like this:

public class Certification
{
    private Guid PersistenceId { get; set; }
    public string Code { get; set; }
}
 
MS O/R Story || Man In Black 3.0 Saturday, May 13, 2006
The strangest thing happened today. All documents and Channel 9 movies about EDM and eSQL have vanished from the public net.

Also, Jimmy Nilsson talked about helicopters and men in black before he promptly vanished from MSN; I haven't seen him since.

I'll get back on this, or maybe I won't ...

 
Microsoft goes O/R Thursday, May 11, 2006
No, I'm actually not referring to DLINQ. Or at least I don't think I am. Jimmy Nilsson pointed me to a link today where the ADO.NET team announced some of the new features they were thinking about for ADO.NET 3.0.

Looking at the link ( http://blogs.msdn.com/dataaccess/archive/2006/05/10/594797.aspx ) I got a bit confused. As I've come to understand it, DLINQ would be a huge part of what ADO.NET 3.0 was to become. At least that was the story at the PDC and in numerous chats after that, although the ADO.NET team communicates a different picture.

After watching a Channel 9 clip ( http://channel9.msdn.com/Showpost.aspx?postid=191667 ) some things got a bit clearer, but it rattled my world a bit. In the movie the team actually speaks a language I've been waiting for them to speak. They talk about differentiating between the "storage structure" and the "conceptual structure", meaning that how the database looks should not matter in your application (or service). If the storage structure changes because of some strange DBA madness, you should not have to rewrite your entire application but just change the mapping files. This is an initiative I really like.

They also talk about hiding the infrastructure and plumbing so we as developers shouldn't need to care about stuff like queries and joins. Again, a new approach from Microsoft. They actually work hard to make sure that the ADO.NET bits won't show in our applications. And why should they? My focus area is not data access but solving the business problem at hand.

Another surprising point they're making is the one about storage independence (well, ODBC and OLE DB were supposed to solve that as well ;) where they talk about using the mapping file as the single point of DB dependency and letting your application structure stay intact. The effect would be that our objects could come from web services, databases or file systems and we would not know the difference anywhere but in the mapping file. Now if they can pull that one off ...

I don't claim to have a full grasp of their strategies, but after watching the movie and a quick read through http://msdn.microsoft.com/data/default.aspx?pull=/library/en-us/dnadonet/html/nexgenda.asp I'm seeing a glimpse of hope.

Maybe Microsoft really gets it this time around.

By the way, while they use LINQ in the bits they're showing, I can't see where DLINQ fits in. What's up with that? Anyone know?
 
Where do my SQL go? Saturday, January 21, 2006
Many are the battles between stored procedures and runtime-generated (ad-hoc) queries, and I suspect there is a lot more to come. The war partly springs out of the momentum O/R mappers with automated queries are getting; sometimes when listening to the arguments I feel like DBAs are afraid for their jobs.

Anyhow,

Doug Reilly has written a nicely balanced article that tries to give the reader the proper technical facts behind all those arguments pro and con for the different sides (performance, security, maintenance etc.). I'm all for runtime-generated queries as the default, resorting to stored procedures when they give you a clear and definite advantage. Even though Doug is an SP advocate, his article gave me increased confidence that my choice of query wasn't all that bad :)

You can read his article here: http://www.developerfusion.co.uk/show/4708/ If you have some time to spare I strongly suggest you follow the link to his blog and read the follow-up comments as well.
 
Typed datasets in ADO.NET 2.0 Friday, January 13, 2006
Last weekend I played around a bit with some typed datasets while a friend and I built a new foundation for our club site based on the Club Starter Kit. The starter kit relies heavily on typed datasets (in all scenarios but one, where they fetch the data with a dataset and then map it over to a generic list containing MemberInfo objects. Consistent? Not really!).

A couple of observations I made during those two days:

TableAdapters
TableAdapters are classes (following the Table Module pattern) added to a DataTable with methods to query for data in certain scenarios. For example, you can configure a table adapter to have a GetCustomer(int) method that queries for a single customer and a GetCustomers() that queries for all of them.

The methods are autogenerated after some configuration, like selecting the T-SQL query you want to use, and are ready to use instantly from object data sources. For our small club site it was quite a nice feature and made us really RAD.
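For those who haven't seen them, usage from code ends up looking something like this; the adapter, dataset and method names below are hypothetical and depend entirely on what you configure in the designer:

// Hypothetical adapter generated from a ClubSiteDataSet typed dataset.
ClubSiteDataSetTableAdapters.MembersTableAdapter adapter =
    new ClubSiteDataSetTableAdapters.MembersTableAdapter();

// Each configured query becomes a method returning a typed DataTable.
ClubSiteDataSet.MembersDataTable allMembers = adapter.GetMembers();
ClubSiteDataSet.MembersRow member = adapter.GetMemberById(42)[0];

Console.WriteLine(member.FirstName);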

Generics?
I cannot see that they've used generics in the base dataset class. Why not is a question I ask myself; it might be because of backwards compatibility. Thus they haven't addressed the boxing / unboxing of value-type columns in the datasets.

Nullable types?
Not supported. For columns that aren't strings, the only setting supported for null values is "Throw Exception". Really annoying.

Well, I think I'll stick to my Domain Models when building something that should last.
 
Wadda ya mean typed dataset? Monday, November 07, 2005
I might be kicking in some open doors here, but when running through some of my demos for öredev I just realized: typed datasets aren't typed, they're just faking it.

Drilling down into the generated code I found this little beauty hiding in the row class:

public int ProductID {
    get {
        return ((int)(this[this.tableProducts.ProductIDColumn]));
    }
}

In my book this is definitely not strongly typed. You still have the boxing and unboxing for all the value types, and you have the explicit cast operation for all types.

The only benefit the "typing" really gives you is name resolution and automated casting. Well, guess whether the DataSet stock rose or dropped on my stock exchange?

I apologize to those of you who already kicked this door open, I just didn't see your kicks or doors.

PS! At a quick glance, it doesn't look like 2.0 datasets are using generics, so it's not any better there. DS

 
NHibernate v1.0 released! Thursday, October 13, 2005
At long last, NHibernate has hit v1.0.

Check it out:

http://wiki.nhibernate.org/display/NH/2005/10/10/NHibernate+1.0+Released



 
ADO.NET goes System.Data V2 Monday, April 04, 2005
So, what's happening to data access in Whidbey? I'll list my favorites and dig into some drawbacks.

DataTable as stand-alone container
I never understood why Microsoft put as much emphasis on the DataSet class as they did in V1. I've even seen people who actually thought that the DataSet itself contained the data as a result of that.

The big thing with using the V1 DataSet was the possibility to serialize the data using the XML serialization process. In V2 they've fixed this.

The DataTable class can now be individually serialized.

Asynchronous execution
Yes! In V2 you will be able to execute asynchronous commands against the database. Well, it's not as great as it sounds: you won't get any progress report and you can't kill the thread gracefully since it's out in unmanaged space.

This really just means that the async execution is merely a shortcut for us, so we won't need to wrap the call with an async delegate.

The async implementation follows the exact same pattern as the rest of the async methods in the framework, like the file I/O and the previously mentioned async delegates.
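A minimal sketch of the pattern; note that, as far as I know, the connection string needs "Asynchronous Processing=true" for SqlClient to allow it:

using System;
using System.Data.SqlClient;

SqlConnection connection = new SqlConnection(
    "Data Source=.;Initial Catalog=Northwind;Integrated Security=true;Asynchronous Processing=true");
SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection);

connection.Open();

// Begin/End pair, same shape as file I/O and async delegates.
IAsyncResult asyncResult = command.BeginExecuteReader();

// ... do other work here ...

using (SqlDataReader reader = command.EndExecuteReader(asyncResult))
{
    while (reader.Read())
    {
        Console.WriteLine(reader.GetInt32(0));
    }
}

connection.Close();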

M.A.R.S - Multiple Active Result Sets
Another nice feature, only supported by SqlClient with SQL Server 2005, but a nice feature nonetheless.

MARS will allow you to execute several commands on a single connection and work on the results simultaneously.
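A sketch of what that enables; the key is MultipleActiveResultSets=True in the connection string, and the queries here are just examples:

using System;
using System.Data.SqlClient;

SqlConnection connection = new SqlConnection(
    "Data Source=.;Initial Catalog=Northwind;Integrated Security=true;MultipleActiveResultSets=True");
connection.Open();

SqlCommand customers = new SqlCommand("SELECT CustomerID FROM Customers", connection);
SqlDataReader customerReader = customers.ExecuteReader();

while (customerReader.Read())
{
    // A second active command on the same connection, while the first reader is still open.
    SqlCommand orders = new SqlCommand(
        "SELECT COUNT(*) FROM Orders WHERE CustomerID = @id", connection);
    orders.Parameters.AddWithValue("@id", customerReader.GetString(0));
    int orderCount = (int)orders.ExecuteScalar();

    Console.WriteLine("{0}: {1} orders", customerReader.GetString(0), orderCount);
}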

Update: I forgot, I'm missing the early beta feature of ExecuteSqlRow and the SqlDataRow, a disconnected row object. I really liked that.