Before my vacation I wrote a post on why I prefer a default architecture without stored procedures. As expected, there were numerous comments (it's a hot topic). I tried my best to explain where I was coming from, why I hold the opinions I do, and why I think my life as a developer is much better when I don't have to worry about stored procedures in every corner of my applications and services. But even with the effort to be "discussion friendly" I got comments attacking my competence, the types of systems I build and my sanity (I deleted that comment).
How come an opinion different from yours has to be an uninformed one? Couldn't it just be that I value different aspects than you do? Couldn't it be that my experience and my scenarios differ from those you base your opinion on? Couldn't it even be that you are actually the uninformed one? How could one tell, without knowing a person or her full history? There is no way of knowing whether someone is uninformed or very well informed. That's why I like to debate and discuss opinions as opinions and facts as facts.
So that's what I'm going to do now. I'll meet some of the comments on the blog here with my opinions and facts, not with "my system is bigger than yours" and "I know more than you" kinds of arguments. This is going to get long, so skip to the parts that interest you.
Why are stored procedures harder to maintain / read / understand? Sounds like you guys lack some experience and knowledge in this area.
For querying, DML and such, SQL is easy to read and understand; that is what it was built for, hence "Structured Query Language". But for business-process-intensive applications, it's not as easy to express intention and logic in SQL as it is in an OO language (or, even better, a DSL like WF XOML).
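To make that concrete, here is a minimal sketch of the kind of business-process rule I mean (the Order and Customer types, and the rule itself, are hypothetical). In C# the intent reads straight off the page; the same flow in T-SQL tends to become nested IF blocks and temp tables.

    public class Order    { public decimal Total; }
    public class Customer
    {
        public bool IsBlocked;
        public decimal CreditLimit;
        public decimal OutstandingBalance;
    }

    public static class OrderApproval
    {
        // Hypothetical approval rule, purely for illustration.
        public static string Approve(Order order, Customer customer)
        {
            if (customer.IsBlocked)
                return "Rejected: customer account is blocked";
            if (order.Total > customer.CreditLimit - customer.OutstandingBalance)
                return "Escalated: order exceeds available credit";
            return "Approved";
        }
    }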
SPs are as easy to source control as any other text-based files you have in your source code projects, and have been so for quite some time now (and yes, even before VS.NET was introduced)
I don't agree that text files and file-system-integrated source control extensions qualify as a "good source control story"; basically, I think it sucks, and it doesn't do dependency checkouts. To qualify, the tooling needs to be integrated with my IDE, be at my fingertips, and let me easily check in and out versions of a database, not just a certain script that I then have to run manually on my development database server. That is what Team System gives you, and that makes the story better.
I'm not sure what you mean by this. So, I'm going to make a guess and say that this has something to do with quickly determining which version of development code is in the database.
Yes, that is exactly what I mean. One of the projects I maintain is a multi-tenant/single-tenant SaaS application, and without discipline and versioning tables it has been hard to maintain. For the logic we do have in the database we need extensive documentation for every tenant to be sure which version of what goes where, which also results in duplication of code and a fear of updating.
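For what it's worth, the versioning-table idea is nothing fancy. A minimal sketch, assuming a hypothetical single-row SchemaVersion table in each tenant database: at deploy time you read the tenant's current version and only run the migration scripts that come after it.

    using System.Data.SqlClient;

    public static class SchemaVersioning
    {
        // Reads the hypothetical SchemaVersion table so a deploy can decide
        // which migration scripts still need to run for this tenant.
        public static int GetCurrentVersion(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT TOP 1 Version FROM dbo.SchemaVersion", connection))
            {
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }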
I see the point that many DBAs have - if they want to change the underlying schema for performance/storage considerations, they should be able to do so as long as they provide developers with the same set of outputs as before, given the same inputs of course.
There are other ways to achieve the same thing for a DBA without constraining the developer. Use views, for instance: views are the tool for structural integrity, not sprocs.
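A sketch of what I mean, with hypothetical table names (the DDL is wrapped in a small C# deployment helper just to keep the examples in one language): the DBA splits the Customers table in two for storage reasons, but publishes a view with the original shape, so queries written against Customers keep working without a single sproc being involved.

    using System.Data.SqlClient;

    public static class StructuralIntegrity
    {
        // The view restores the original Customers shape after the DBA has
        // split the underlying storage into two tables.
        private const string RecreateCustomersView = @"
            CREATE VIEW dbo.Customers AS
            SELECT c.CustomerId, c.Name, a.City, a.Country
            FROM   dbo.CustomerCore    AS c
            JOIN   dbo.CustomerAddress AS a ON a.CustomerId = c.CustomerId";

        public static void Apply(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(RecreateCustomersView, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }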
There are ways to dynamically generate WHERE and ORDER BY clauses using T-SQL and parameterized procedures/queries.
Which usually involves dynamic SQL inside the sproc.
Not to sound insulting, but the demand for multiple potential join combinations sounds like, to me, a poorly planned data access scenario.
Not at all; it's about not knowing what kind of joins I want until I see them. In an agile development environment you incrementally build up your code: for every story you implement you add more code. For some stories you might want the join, for others you don't. Having to add multiple stored procedures for that is painful when a tool can work out the best join for my scenario. I'm not talking about joins changing at runtime, I'm talking about being flexible during development.
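As a sketch of that flexibility (hypothetical Order and Customer types; any LINQ-capable tool would do): one story needs only order totals, a later story adds the customer join, and the provider generates the matching SQL. No extra sproc gets written, reviewed and deployed in between.

    using System.Collections.Generic;
    using System.Linq;

    public class Order    { public int OrderId; public int CustomerId; public decimal Total; }
    public class Customer { public int CustomerId; public string Name; }

    public static class OrderQueries
    {
        // Early story: totals only, no join needed yet.
        public static IEnumerable<object> TotalsOnly(IQueryable<Order> orders)
        {
            return orders.Where(o => o.Total > 1000m)
                         .Select(o => new { o.OrderId, o.Total });
        }

        // Later story: the customer name is needed, so the join is added here
        // and the LINQ provider generates the corresponding SQL.
        public static IEnumerable<object> WithCustomerName(
            IQueryable<Order> orders, IQueryable<Customer> customers)
        {
            return orders.Where(o => o.Total > 1000m)
                         .Join(customers,
                               o => o.CustomerId,
                               c => c.CustomerId,
                               (o, c) => new { o.OrderId, o.Total, c.Name });
        }
    }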
Your arguments show a great lack of understanding of what a stored procedure is and what it gives to you. You probably never ever worked on anything big, or you're just completely ignorant.
Of course, as I said before. Any opinion different from yours is probably based on those things you mention.
1. A stored procedure can be represented by a text file.
Agreed. Read my above comments on that.
How do you keep control of your C# code? Don't you see it's the same problem? How do you build that code? How would you know what versions of your classes you put into your .exe?
Eh, no. That code is very simple to branch, check out and build a new (or old) version of. Databases aren't, for the reason you yourself outline:
You can't just drop a table and recreate it because you want a new column. As you understand, we need to make sure all the data stays as well.
Exactly, and hence the bad versioning story for stored procedures. I can just check out C# code from any label or branch, create a new version and deploy. Databases are much, much harder, because the data has to survive every change.
How can using 2 different technologies be less flexible? Just use the one that does the job best and make sure you layer your application.
Using two technologies might not be; maintaining two sets of skills is. If there is a tool that lets me concentrate and focus on one of those skills, why would I even want to worry about the second? In the early days I did the usual C / assembly thing. I loved it when I could drop assembly and just focus on C++. Wouldn't you?
SPs are designed for logic on data
I disagree: SPs are designed for querying logic on data, not for business-process kinds of logic. There are multiple tools that are much better at that. Even the relational databases acknowledge this; that's why we got cubes, and languages to work on the cube, for BI scenarios.
With this kind of reasoning I guess you even check all constraints in C#.
Of course I do. I want the tool best suited to the task at hand.
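A trivial sketch of what that looks like (the rule itself is hypothetical): the same invariant a CHECK constraint would enforce, but placed where it can be unit tested in isolation and reported to the user before a round trip to the database.

    using System;

    public class Reservation
    {
        public DateTime Start;
        public DateTime End;

        // The same invariant a CHECK constraint could enforce, checked in code
        // where it can be unit tested and surfaced to the user immediately.
        public void Validate()
        {
            if (End <= Start)
                throw new ArgumentException("A reservation must end after it starts.");
        }
    }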
SPs automagically handle the isolation level that you need, and this with no impact on database scalability. Try doing that with dynamic SQL.
Gladly, and with no problem at all. The isolation levels you need for your transactions are as easy to set from code as in sprocs. I'm not sure what you mean by "automagically"; how would sprocs know what isolation level I want on my data?
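For the record, here is how little code it takes in .NET. System.Transactions is the real API; only the shape of the helper is mine. Any ADO.NET command opened inside the scope enlists in the transaction automatically.

    using System;
    using System.Transactions;

    public static class TransactionExample
    {
        public static void RunSerializable(Action databaseWork)
        {
            var options = new TransactionOptions
            {
                IsolationLevel = IsolationLevel.Serializable,
                Timeout = TimeSpan.FromSeconds(30)
            };

            // Everything executed inside the scope runs at the isolation level
            // chosen here; no stored procedure is involved.
            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            {
                databaseWork();
                scope.Complete();
            }
        }
    }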
and scalability will go down
I disagree. For the scenarios I'm working on there is no extra impact on scalability when controlling transactions or isolation levels from code rather than in a sproc. I would be happy to hear in which scenarios the scalability loss occurs, and why.
How can they be hard to test?
I'm not sure what standards you set for your testing, but a single execution in a query window, a debugging session or watching a profiler trace isn't "testing" to me when it comes to business logic. I would encourage you to read something by Beck, or anything on unit testing or even TDD.
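By testing I mean something repeatable and automated, along these lines (NUnit syntax; OrderApproval is the hypothetical rule sketched earlier). This runs on every build, not once in a query window.

    using NUnit.Framework;

    [TestFixture]
    public class OrderApprovalTests
    {
        [Test]
        public void Order_over_available_credit_is_escalated()
        {
            // 100 credit limit minus 80 outstanding leaves 20 available,
            // so a 50 order must be escalated.
            var customer = new Customer { CreditLimit = 100m, OutstandingBalance = 80m };
            var order = new Order { Total = 50m };

            Assert.AreEqual("Escalated: order exceeds available credit",
                            OrderApproval.Approve(order, customer));
        }
    }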
Ever heard of CRUD SPs? These are the trivial ones you talk about. And yes, there have been tools to generate them for at least 10 years.
Agreed. Nothing new under the sun.
1. All queries are pre-parsed even before the first time they are executed. In a busy DB application the toughest job is not actually executing the SQL, but parsing all of it (check syntax, grants, synonyms, optimize). So imagine what would happen with dynamic SQL. It just kills scalability.
Not true. At least in SQL Server 2005 and later, dynamic SQL is parsed and its query plan cached; not before it executes the first time, but that's enough for any performance issues you might have. Pre-compilation is a huge topic by the way, and not as straightforward as you claim: in some scenarios pre-compilation is actually a drawback. That's why we got the WITH RECOMPILE keywords.
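To be precise about what I mean by dynamic SQL here: parameterized statements sent from code, like the sketch below (hypothetical table). Because the statement text never changes and only the parameter values vary, SQL Server caches and reuses the plan much as it does for a sproc.

    using System.Data.SqlClient;

    public static class CustomerData
    {
        public static object GetName(string connectionString, int customerId)
        {
            // Stable statement text plus parameters: SQL Server 2005+ caches
            // the plan and reuses it on every call; only @id varies.
            const string sql = "SELECT Name FROM dbo.Customers WHERE CustomerId = @id";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@id", customerId);
                connection.Open();
                return command.ExecuteScalar();
            }
        }
    }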
All objects in the database have something called strong dependency checking. This doesn't exist in your .NET world, and it's a huge help if you need to change something, you know, like maintain your application. An example: if you drop a column on a table, then all SPs that refer to that column will automagically become invalid. This happens instantly, and not at run-time as with normal code.
Instantly? In my environment (SQL Server 2005, SQL Server 2008) I get nothing until the sproc is executed. Also, what if my business logic or presentation depends on that column? You will break my application.
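This is easy to verify. A small repro sketch (hypothetical names; assumes an open connection): the column drop goes through without a word, and the error only surfaces when the procedure finally runs.

    using System.Data.SqlClient;

    public static class DeferredResolutionDemo
    {
        public static void Run(SqlConnection openConnection)
        {
            Execute(openConnection, "CREATE TABLE dbo.Demo (Id int, Legacy int)");
            Execute(openConnection, "CREATE PROCEDURE dbo.GetLegacy AS SELECT Legacy FROM dbo.Demo");

            // SQL Server drops the column even though dbo.GetLegacy refers to it...
            Execute(openConnection, "ALTER TABLE dbo.Demo DROP COLUMN Legacy");

            // ...and only this call fails, with an 'Invalid column name' error.
            Execute(openConnection, "EXEC dbo.GetLegacy");
        }

        private static void Execute(SqlConnection connection, string sql)
        {
            using (var command = new SqlCommand(sql, connection))
                command.ExecuteNonQuery();
        }
    }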
A question for you: how do you define scalability? For me it's the ability of the system to handle more users when you add more hardware. You use the word as if it had something to do with query performance.
All in all, I talked about a default architecture: using the right tool for the right job. Some overuse C#, others overuse sprocs. I tend to avoid sprocs for any data access, since tools can generate the SQL for me. That kind of architecture works very well for a lot of scenarios, maybe not for all, but for enough of them to call it a default.
Link to the original post and comments: https://lowendahl.net/showShout.aspx?id=213