|
|
C# (19)
LINQ To Objects for .NET Framework 2.0 applications |
Tuesday, April 01, 2008 |
Roger just made a library with extension methods for LINQ available: http://rogeralsing.com/2008/03/31/linq-to-objects-for-net-2-available/
In his comments there is a link to another project that has done the same: http://www.albahari.com/nutshell/linqbridge.html
This is the power of Microsoft's red/green approach to framework enhancements: staying additive and layering new features on top of v2.0 of the CLR. Now if only someone could implement LINQ to SQL for 2.0 ;)
|
|
Comments (0) |
|
Using C# 3.0 extension methods for .NET Framework 2.0 applications |
Saturday, March 29, 2008 |
I just posted about how you can use some of the C# 3.0 features in 2.0 applications (https://lowendahl.net/showShout.aspx?id=191). In that post I stated that it wasn't possible to create and compile extension methods for 2.0 applications. I have since found a workaround.
The compiler looks for the attribute System.Runtime.CompilerServices.ExtensionAttribute to decorate extension methods with. It turns out that this dependency is loose, meaning the attribute doesn't have to come from System.Core. So I just added:
namespace System.Runtime.CompilerServices
{
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class | AttributeTargets.Assembly)]
public sealed class ExtensionAttribute : Attribute
{
public ExtensionAttribute() { }
}
}
to my project and now extension methods and the usage of the same compiles fine:
public static class StringExtensions
{
public static int ToInt32(this string value)
{
return int.Parse(value);
}
}
static void Main(string[] args)
{ string s = "42";
int i = s.ToInt32();
Console.WriteLine(i);
}
Now for my next trick I'll look into getting LINQ to work as well.
|
|
Comments (5) |
|
C# 3.0 For .NET 2.0 applications |
Saturday, March 29, 2008 |
How about using lambdas, anonymous types, type inference, automatic properties and lazy initialization in your .NET Framework 2.0 application? The multi-targeting feature in Visual Studio 2008 will help you do just that. With multi-targeting, VS 2008 can build applications for .NET Framework 2.0 through 3.5; you simply set a project property to tell VS which version to target.
Selecting the target framework compiles the application to run on the specified version. This makes it quite easy to upgrade from VS2005 to VS2008, since you don't have to move your codebase to v3.5.
Note: While the application will run on .NET Framework 2.0, the project files will be specific to VS2008, so if the developers on your team are upgrading, make sure all of them can before letting anyone convert the project files.
This is a really nice feature in VS2008. But what does it have to do with the title of this post?
When investigating how Visual Studio does the multi-targeting, one of the first things you will notice is that Visual Studio uses the CSC compiler from v3.5 of the framework:
There is no special flag sent to CSC to tell it to compile for v2; it just compiles. This works because the real difference between v2 and v3.5 lies in the class libraries, not in the CLR / CLI specifications. Both versions of the framework share CLR v2.0.50727 (3.5 just adds a service pack to it), and even when building apps for v3.5 we get the base class library mainly from v2, so mscorlib and its siblings are not upgraded to v3.5.
The effect of this is that the projects you upgrade from VS2005, and the applications you build that target v2, will be able to use C# 3.0 features, since they are all compiler tricks. Well, almost all of them: extension methods require a reference to System.Core.dll, which isn't available when targeting v2 of the framework. I've tried and confirmed that lambdas, anonymous types, type inference, automatic properties and lazy initialization all work fine for your v2 applications using Visual Studio 2008.
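To make this concrete, here is a small sketch (the class and member names are my own examples) of the kind of code that compiles in VS2008 with the target framework set to .NET 2.0:

```csharp
using System;

class Person
{
    public string Name { get; set; }                  // automatic property
}

class Program
{
    static void Main()
    {
        var person = new Person { Name = "Ada" };     // type inference + object initializer
        Predicate<int> isEven = n => n % 2 == 0;      // lambda bound to a 2.0 delegate type
        var anon = new { person.Name, Year = 2008 };  // anonymous type
        Console.WriteLine("{0} {1} {2}", anon.Name, anon.Year, isEven(4));
    }
}
```

Note that the lambda is bound to Predicate&lt;int&gt;, which lives in mscorlib v2.0, so no System.Core reference is needed.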
So if the CLR and the compiler are indifferent, what does differ between v2 and v3.5? It's really about which class libraries ship with each version; when targeting v2, Visual Studio will filter out the assemblies that won't be available on a v2 installation.
So upgrading to VS2008 brings more benefits than just nice new colors and some new analysis tools. If you're building applications for v2 of the framework, you will also get some of those nice productivity enhancements that C# 3.0 brings to the table.
Now, is there any reason not to upgrade at this point?
|
|
Comments (6) |
|
LINQ: Killing Reflection Softly |
Tuesday, February 26, 2008 |
Roger just IM'ed me a link to his blog: http://rogeralsing.com/2008/02/26/linq-expressions-access-private-fields/ where he removes the need for reflection to access private fields by creating an expression tree instead. Really cool, and a lot more performant than reflection:
"The Reflection.FieldInfo approach took 6.2 seconds to complete.
The Compiled lambda approach took 0.013 seconds to complete.
That?s quit a big difference."
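The technique, roughly, is to build an expression tree that reads the field and compile it once into a reusable delegate. This is my own minimal sketch (the class and field names are made up), not Roger's benchmark code:

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

class Secretive
{
    private int secret = 42; // the private field we want to read
}

class Program
{
    static void Main()
    {
        FieldInfo field = typeof(Secretive).GetField(
            "secret", BindingFlags.Instance | BindingFlags.NonPublic);

        // Build the expression target => target.secret and compile it once...
        ParameterExpression target = Expression.Parameter(typeof(Secretive), "target");
        Func<Secretive, int> getter = Expression.Lambda<Func<Secretive, int>>(
            Expression.Field(target, field), target).Compile();

        // ...after which every call is a plain delegate invocation,
        // with no per-call reflection cost.
        Console.WriteLine(getter(new Secretive()));
    }
}
```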
|
|
Comments (0) |
|
Composing entities in a SOA using LINQ. |
Tuesday, January 22, 2008 |
Recently I was put in front of a scenario where data from two different services had a relation and needed to be united to form a meaningful reply to consumers. The scenario included an enrollment service and a student service; the enrollment contained the id of the student, and the consumers needed a list of enrollments with meaningful student details.
By using LINQ's capability to join two IEnumerables and to create projections "on the fly", what I first thought would take some time to build actually turned out to be a short but sweet pleasure.
This is (kind of) what I did:
List<Enrollment> enrollments;
List<Student> students;
using (EnrollmentServiceClient client = new EnrollmentServiceClient())
{
enrollments = client.ListBy(date, testCenterId);
}
List<string> studentIds = (from item in enrollments
select item.StudentId).ToList();
using (StudentServiceClient client = new StudentServiceClient())
{
students = client.ListFor(studentIds);
}
List<StudentEnrollment> response = (from enrollment in enrollments
join student in students
on enrollment.StudentId equals student.Id
select new StudentEnrollment ()
{
FullName = student.FirstName + " " + student.LastName,
Code = enrollment.CertificationCode,
Certification = enrollment.CertificationName,
Status = enrollment.Status.ToString(),
InvoiceStatus = enrollment.InvoiceStatus.ToString()
}).ToList();
return response;
This is where I went from loving LINQ to worship it, it just solved an enterprise grade challenge for me with very little and very beautiful code.
In service-orientation (SO) terms, what I just created was an Entity Composition Service (ECS), which is often needed after applying the "decomposition principle" to your enterprise services. Basically, the principle says that data should be tied to the service that implements the process the data belongs to. As in the above scenario, where the service handling student information and the one creating enrollments are logically and physically separated in the enterprise, the data naturally lives in different corners of the enterprise, and when creating lists like this it needs to be united.
This can be kind of hairy to pull off; luckily LINQ saved the day!
Technorati tags: LINQ, WCF, SOA
|
|
Comments (2) |
|
Having Fun with Anonymous types and Extension Methods in C# 3.0 |
Sunday, November 11, 2007 |
For version 3.0 of C#, Microsoft has added features that make the language just a little bit dynamic while still strongly typed. One of these features is collection initializers for lists and dictionaries, similar to how we initialized arrays in earlier versions:
List<int> list = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Dictionary<string, string> dictionary = new Dictionary<string, string>
{
    { "key1", "value1" },
    { "key2", "value2" },
    { "key3", "value3" },
};
I like this feature. But for Dictionaries where the key will be a string, wouldn't it be cool to do this instead:
Dictionary<string, string> dictionary = new Dictionary<string, string> { key1 = "value1", key2 = "value2", key3 = "value3" };
Fredrik (http://fredrik.nsquared2.com/) popped up on MSN with this idea earlier and I really liked it. After playing around with some C# 3.0 example code, I came up with an almost as neat solution as the one proposed.
By utilizing extension methods, anonymous types and fluent interfaces this was the resulting usage:
Dictionary<string, string> dic = new Dictionary<string, string>()
    .Init(new { Key1 = "val1", Key2 = "val3" });
Arguably not as cool as the proposed syntax, but still.
I've been playing around with the implicit cast operator and some generics to try to make the syntax even more slick, but I haven't managed to just yet. Anyone have suggestions for improvements? Here's my code:
public static class DictionaryExtensions
{
    public static Dictionary<string, T> Init<T>(this Dictionary<string, T> dictionaryToInit, object initializationData)
    {
        if (dictionaryToInit == null)
            ThrowDictionaryNullException();

        if (initializationData != null)
        {
            PropertyDescriptorCollection propertiesFromType = TypeDescriptor.GetProperties(initializationData);
            foreach (PropertyDescriptor propertyItem in propertiesFromType)
            {
                object value = propertyItem.GetValue(initializationData);
                if (IsOfType<T>(value))
                {
                    dictionaryToInit.Add(propertyItem.Name, (T)value);
                }
                else
                {
                    ThrowExceptionForInvalidValueType(propertyItem);
                }
            }
        }
        return dictionaryToInit;
    }

    private static bool IsOfType<T>(object value)
    {
        return value is T;
    }

    private static void ThrowExceptionForInvalidValueType(PropertyDescriptor propertyItem)
    {
        throw new ArgumentException(string.Format("Value for key {0} is of invalid type for this dictionary", propertyItem.Name));
    }

    private static void ThrowDictionaryNullException()
    {
        throw new ArgumentException("Can't initialize a null Dictionary");
    }
}
|
|
Comments (0) |
|
Encapsulation and Automatic properties in C# 3.0 |
Monday, September 10, 2007 |
C# 3.0 is just around the corner with all its promises of glory and of bridging relational and OO semantics. In the after-waves of the LINQ implementation, a couple of cool new features are popping up, some cooler than others (I still don't like the partial methods thingy; I explained why here: https://lowendahl.net/showShout.aspx?id=132). When it comes to language innovations in 3.0, just about every feature is implemented to support the LINQ scenarios. That is the case for automatic properties as well.
At first when I saw them I thought, "oh cool, that will save me a lot of time". But then I thought about it, and I'm not really that psyched any more, and you can bet your retro gaming console that I will tell you why.
But first, a short introduction to what automatic properties are. The idea is fairly simple: if you don't need any logic in your getters and setters, you shouldn't be forced to write them. The syntax is quite simple:
public string Name { get; set; }
This just tells the compiler what you are after, and the compiler will gladly generate the backing field and the code to read from and write to that generated field. For a more technical description, Bart De Smet has written an excellent post here: http://community.bartdesmet.net/blogs/bart/archive/2007/03/03/c-3-0-automatic-properties-explained.aspx
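Roughly, the expansion looks like this (a sketch; the real generated backing field has a name that is illegal in C#, so the name below is a stand-in):

```csharp
// What you write:
public class Person
{
    public string Name { get; set; }
}

// Approximately what the compiler generates:
public class PersonAsGenerated
{
    private string nameBackingField; // real name is compiler-generated and unpronounceable in C#
    public string Name
    {
        get { return nameBackingField; }
        set { nameBackingField = value; }
    }
}
```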
This is really convenient for another of the LINQ support features: anonymous types. When those types are generated they can utilize auto properties, and voilà, you have a class with properties.
Now the question is: do automatic properties have any real value in any other scenario? I would argue that the answer isn't technical but philosophical.
Before I make my argument, I would like to explore the reasons why we encapsulate. For most, it's something you do automatically; you've been pestered with the encapsulation mantra: "hide the access to the data so you can intercept the access or change how the data is stored". In most languages there is no native support for encapsulation, so you've often needed special methods like:
private string _name;

public void setName(string value) { }
public string getName() { }
These are usually added as a reflex, since it's usually pretty hard to change from field access to set/get methods late in the project (a lot of search and replace if you don't have great tool support for it).
Since it's kind of a pain to add later, encapsulation has been the rule of thumb for so long that I think Microsoft has forgotten the reasons and defaulted to "encapsulate everything" logic.
Encapsulating fields you don't need to hide right now is really a YAGNI violation, very much so in C# and VB.NET: since the syntax for accessing the get/set methods of a property is the same as reading/writing a field, we really don't have to worry about changing the implementation late in the project.
Can you (without IntelliSense) tell whether this is a property or a field:
currentCustomer.Name = "BillG";
Probably not. So if this starts out as a field, it won't break any existing code to encapsulate it in a property later on.
Now back to automatic properties.
The idea is to simplify encapsulation, getters/setters, but it really misses the point: if I don't need interception, I don't need getters/setters. In that sense automatic properties violate YAGNI. Since you can't access the backing field, and you can't make just the get or just the set automatic while implementing the other, automatic properties really don't bring more to the table than a public field would. (BTW, that is a real deal breaker for me: frameworks that access the backing field of a property for various reasons will not be able to support automatic properties. Read Bart's post for the details; hint: check the naming of the backing field.)
So why are they there? Well, since MS lives by the "encapsulate everything" rule, its frameworks only work on properties. They have disqualified public fields as a viable way to read and write data to an object in their data binding solutions. As far as I know this is not a technical decision but a design decision, one that breaks some of the most basic principles. It would be nice to get comments from MS on the arguments behind it.
To sum up, I can understand where automatic properties come from. They really do simplify encapsulation, and developers don't have to write a lot of "dumb" encapsulation code just to support data binding and other features in the .NET Framework. But the idea that I needed to in the first place is the real issue here. .NET encapsulation is so similar to field access that the rule of "always encapsulate" is out of date, and MS should do something about their frameworks instead of inventing new language features.
|
|
Comments (15) |
|
GC Demo from MSDN Live |
Saturday, March 24, 2007 |
Yet Another Dispose Implementation
Attached to this post is the Dispose pattern demo I showed on the MSDN Live roadshow, for everybody's enjoyment. Key points from my and Johan's GC talk:
- The GC promotes memory to different generations, which can make the application appear to leak memory. Before you cry wolf, make sure it is not just that generation 2 has not been collected yet.
- Finalizers prolong object lifetimes. A finalizable object will always get collected no sooner than in generation 1.
- The IDisposable pattern is there to handle unmanaged resources; if your object only holds managed resources, you don't need it.
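For reference, the canonical Dispose pattern looks roughly like this (a sketch with my own names; the attached demo may differ in detail):

```csharp
using System;

public class ResourceHolder : IDisposable
{
    private IntPtr nativeHandle; // stand-in for an unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // already cleaned up, skip finalization
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release managed resources here (only safe on the explicit Dispose path)
        }
        // release unmanaged resources here (safe on both paths)
        disposed = true;
    }

    // Finalizer as a safety net; remember that it prolongs the object's lifetime.
    ~ResourceHolder()
    {
        Dispose(false);
    }
}
```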
Also, Fredrik Normen (http://fredrik.nsquared2.com) tipped me off about this gotcha. What is wrong with the usage of using in the code below?
using (SqlCommand command = new SqlCommand("MySP", new SqlConnection("")))
{
    //..
    command.Connection.Open();
    command.ExecuteNonQuery();
}
Yes, you got it: the connection will not get disposed in this scenario, since SqlCommand doesn't pass the dispose down to its connection object.
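One way to get it right is to give the connection its own using block (a sketch, assuming the same stored procedure as above):

```csharp
// Stacked using blocks: both objects are disposed, in reverse order,
// even if ExecuteNonQuery throws.
using (SqlConnection connection = new SqlConnection(""))
using (SqlCommand command = new SqlCommand("MySP", connection))
{
    connection.Open();
    command.ExecuteNonQuery();
}
```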
Make sure you understand what the Dispose method actually does for objects you use in this way; sometimes you will need additional cleanup even when IDisposable is implemented. Reflector might help you find out (http://www.aisto.com/roeder/dotnet/).
|
Attached file: GCDemo.zip |
|
Comments (0) |
|
Debugging LINQ Queries |
Wednesday, March 21, 2007 |
When building applications based on LINQ, debugging is a must. In Visual Studio "Orcas" a lot of effort has been put into giving us a smooth experience when debugging queries. In addition to breaking on the row where the query is created, it is also possible to break on separate parts of the query when it is executed. The image below demonstrates what a breakpoint on a where clause could look like:
Since the query isn't executed where it is declared (a concept called deferred execution), the breakpoint will not be hit until you actually execute the query. In practice this means that when the query executes, the debugger will go back to the query definition and break the execution there.
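A tiny sketch (my own example) of why: the where clause runs during the foreach, not at the line where the query is declared.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        List<int> numbers = new List<int> { 1, 2, 3, 4 };

        // Nothing executes here; the query is only being defined.
        IEnumerable<int> evens = from n in numbers
                                 where n % 2 == 0 // a breakpoint here is hit during the loop below
                                 select n;

        // Deferred execution: the where clause runs now, once per element.
        foreach (int n in evens)
            Console.WriteLine(n);
    }
}
```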
The following image illustrates how execution really is at the "in" keyword of the foreach statement (there is a grey box around "in", kind of hard to see, I know), while the debugger indicates the where clause as the one currently executing.
This will take some training to get used to. In the above example it's still fairly easy to follow the code, but in a more real-world example it is bound to get more confusing. What if the query is defined in another method somewhere else in our domain?
Yes, the debugger will jump to that file and to the breakpoint at the query definition. The example below has two classes, Program and Gateway; the query is defined in the Gateway class and Program is what executes it.
So hitting "in" (or any other place where execution is triggered) will teleport you immediately to the Gateway.cs file and the creation of the query. Unorthodox, and possibly very confusing, since the code we are really looking at is the declaration of the query, not its execution, and it might be totally outside the current execution context (i.e. in a DAL while the real code is executing in the presentation layer).
All in all, the C# team has put a lot of effort into a smooth debugging experience, and as we get used to this teleporting around, it might not feel as awkward as it currently looks.
|
|
Comments (2) |
|
Bringing a little bit of Ruby to C# |
Saturday, March 03, 2007 |
The last couple of weeks I have been playing around with Ruby and Ruby on Rails. I really like the natural flow of the language's syntax, and it inspired me to bring parts of that to C#.
In ruby you can write iterations like this:
4.times do
print "Yo!"
end
The above code snippet will execute print "Yo!" four times, like a for statement. In addition to the simple times method there are also upto and step, which iterate to an upper bound and take steps other than +1. Have a look here http://www.ruby-doc.org/docs/ProgrammingRuby/html/tut_expressions.html if you want to learn more about Ruby expressions.
This syntax is really neat; I want it in my everyday coding!
In C# 3.0 we have the ability to extend types with extension methods. Since I liked this syntax so much, I decided to extend int with similar functionality. My first attempt was to mimic the times syntax. The built-in Func<> type requires a return type, and since I just want to execute the code, not handle any result, I first needed a simple void delegate:
public delegate void Func();
The extension method was then simple to implement:
public static void Times(this int ubound, Func codeBlockToExecute)
{
    for (int index = 0; index < ubound; index++)
    {
        codeBlockToExecute.Invoke();
    }
}
(If you are unfamiliar with extension methods, this is good reading: http://www.interact-sw.co.uk/iangblog/2005/09/26/extensionmethods )
Now, armed with this method, we can use the fact that the C# compiler treats numeric literals as integers by default and call our Times method:
5.Times(delegate { Console.WriteLine("Yo!"); });
Not as beautiful as the original (the delegate keyword kind of takes away the grace), but still functional enough. Using a lambda it is possible to make it a bit more slick, though not as flexible, since an expression lambda in C# only supports a single expression. Still, have a look at this:
5.Times( () => Console.WriteLine("Yo !"));
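The upto and step variants could be sketched like this (my own guess at the attached code, so the exact signatures in the zip may differ; note that .NET 2.0's Action&lt;int&gt; delegate takes the loop variable as a parameter, which fits better here than the void Func above):

```csharp
using System;

public static class RubyLikeIntExtensions
{
    // 3.Upto(6, ...) executes the block with 3, 4, 5, 6
    public static void Upto(this int from, int to, Action<int> codeBlockToExecute)
    {
        for (int i = from; i <= to; i++)
            codeBlockToExecute(i);
    }

    // 0.Step(10, 2, ...) executes the block with 0, 2, 4, 6, 8, 10
    public static void Step(this int from, int to, int step, Action<int> codeBlockToExecute)
    {
        for (int i = from; i <= to; i += step)
            codeBlockToExecute(i);
    }
}
```

Usage: 3.Upto(6, delegate(int i) { Console.WriteLine(i); });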
Attached to this post (you need to go to the website to get it) is the code for this and other Ruby-like extensions, including the step and upto ones. The code is written and compiled on the Orcas March CTP.
Playing around with this and some other extensions, I've put together a wish list for the C# team:
- A Func delegate with no return parameter (I know, I am lazy)
- Extension Properties ( 5.Times.Do() would really be cool to do)
- Handle {1,2,3} alone on a line as an array, so we could do {1,2,3}.foreach.do()
I guess I will pester the team about it when I get to Redmond in a couple of weeks.
|
Attached file: RubyLikeExtensions.zip |
|
Comments (7) |
|
More List functions |
Friday, January 20, 2006 |
List<T> really brings a lot to the table. Besides the methods I showed in an earlier post, it also has its own ForEach implementation (http://msdn2.microsoft.com/en-us/library/bwabdf9z.aspx), which takes a delegate to a method, similar to the Predicate one used by Exists, FindAll etc.
Now, why would one use a specific ForEach implementation? Well, not counting the hype factor of being able to do this:
listOfObjects.ForEach(delegate(T obj) { DoSomethingWithObj(obj); });
It also turns out that it performs better than an ordinary foreach loop. Jon Skeet blogged about it here: http://msmvps.com/blogs/jon.skeet/archive/2006/01/20/foreachperf.aspx
|
|
Comments (0) |
|
More C# 2.0 usages (generics, anonymous methods) |
Wednesday, January 18, 2006 |
In an earlier post I talked about a simple scenario where we could use generics. In this post I'll expand on that and look at the List<T> class and what you can do with generics in cooperation with anonymous methods and delegates.
The generic List<T> class is the successor of the ArrayList class. The latter was weakly typed and could only store System.Object, which led to runtime casting and boxing.
The List<T> class helps us out here by creating a strongly typed list. I assume you're familiar with the basics of generics, so consider the following code:
// See the attached download file for complete code.
public class Customer
{
    string Name;
    string Company;
    decimal YearlyOrderAmount;
}
. . .
List<Customer> listOfCustomers = CustomerRepository.List();
Now we have a strongly typed list of customers in our application code. With the help of the generic type in the list, Microsoft has been able to create a couple of helpful methods on the List<T> class, methods that use the generic type together with generic delegates to give us some really nice features. A couple of those methods are listed below:
public bool Exists(Predicate<T> match)
public List<T> FindAll(Predicate<T> match)
public T FindLast(Predicate<T> match)
The Predicate<T> delegate is a simple delegate that points at a method accepting one single parameter of type T and returning a boolean. Its definition is as follows:
public delegate bool Predicate<T>(T obj)
The methods above use this delegate to execute code that will decide if an object is a match or not.
Let's start with a simple example of how to use it with regular delegate syntax:
Predicate<Customer> matchDelegate = new Predicate<Customer>(AllWithEInTheirName);
. . .
public bool AllWithEInTheirName(Customer customerToMatch)
{
    return (customerToMatch.Name.ToLower().IndexOf("e") >= 0);
}
We can now use the delegate object with the FindAll method to filter out all customers with an "e" in their name:
List<Customer> customersWithE = listOfCustomers.FindAll(matchDelegate);
For every object in the list, the FindAll method will execute the predicate delegate, passing it the object, to determine whether it matches our criteria.
Now for the really dynamic bit :) How about using some anonymous delegates in this scenario?
List<Customer> customersWithE = customerList.FindAll(
    delegate(Customer customerToMatch)
    {
        return (customerToMatch.Name.ToLower().IndexOf("e") >= 0);
    }
);
In this piece of code we use the C# 2.0 ability to create a method on the fly, which lets us decide the rules of a match right where they're needed. Although not the most beautiful code in the world, this is really flexible and allows us to quickly set up new filter options for other use cases without having to create methods and put them in a class somewhere.
Download the code here Predicate.Zip [1k]
|
|
Comments (1) |
|
Generics, a simple usage for OOP |
Friday, January 13, 2006 |
A simple example where I find generics very useful. Imagine a scenario where you have an abstract base class that defines a Create method. The Create method will be implemented by several subclasses and will return a single entity object.
In C# 1.0 / 1.1 we had to rely on type casting to make this happen, like so:
public abstract class Entity { public int Id; }

public abstract class Factory { public abstract Entity Create(); }

public class Customer : Entity { public string Name; }

public class CustomerFactory : Factory
{
    public override Entity Create() { return new Customer(); }
}
Not really an optimal solution but it has worked so far.
A more suitable solution is to use generics in the base class and specify the generic type in the subclass when deriving. Something like this:
public abstract class Entity { public int Id; }

public abstract class Factory<typeToCreate> where typeToCreate : Entity
{
    public abstract typeToCreate Create();
}

public class Customer : Entity { public string Name; }

public class CustomerFactory : Factory<Customer>
{
    public override Customer Create() { return new Customer(); }
}

Now, with that simple little change in our base class, we avoid any semi-typed return values and eliminate type casting for our factory's clients.
|
|
Comments (1) |
|
Mixins and .NET |
Friday, September 16, 2005 |
One of the things that has been circulating as a rumor over the last couple of months is the implementation of mixins in the .NET runtime.
For those of you who don't know what a mixin is: it's basically an interface with default implementations of some of its virtual methods. This approach could probably be the solution for most of the cases where multiple inheritance would be a good idea, without the diamond problem.
Now there has not been much substance in any of the rumors nor has anyone been able to get a straight answer; until today.
Finding this out was one of my top priorities on my list of missions at the PDC. So at the Ask The Experts session yesterday I talked a little bit about it with Shawn Wiltamuth from the C# language team. The story he told was that the C# team looks really positively on it but hasn't started to look at the implementation yet. One of the key issues, he thought, was how deep in the CLI this should be implemented. His final statement was "we're carefully observing the progress around the research on this matter".
That was yesterday. Today the same question was put to the CLR team, and the picture cleared a bit more. The CLR team is really psyched about getting this into the VM and has given it some thought. Although, they stated, they have had some resistance from some language designers who clearly didn't want this kind of complexity in their languages.
Putting the two together, a pretty nice story takes form: the CLR guys want it, the C# team thinks it's a good idea; now who could be left?
|
|
Comments (1) |
|
Turn your back on the darkside ... |
Monday, June 20, 2005 |
... and give your applications a well-deserved aesthetic, functionality and productivity upgrade.
Download the new version of the VB-to-C# converter today. Don't hesitate, I know you want to...
http://www.vbconversions.com/
|
|
Comments (1) |
|
How bad is string concat really? |
Sunday, June 19, 2005 |
So everybody has probably heard that string concatenation is a bad thing and that you should use StringBuilder for as many string operations as possible?
If you already know the rationale behind this, you can skip this section and jump down to "The Proof".
The Concept
This is due to the fact that strings are immutable. Immutable, in layman's terms, means "can't be changed". So strings can't change!? But that can't be right, we change strings all the time?
The CLR of course knows about the immutability and compensates for it: it will create a new string for you every time you make a change.
That means that the following syntax:
string message = string.Empty;
message += "Hello ";
message +=" world!";
Will actually create three strings in memory, but we will only hold a reference to one of them. The other two are lost to us and are ready for garbage collection anytime between now and when the moon falls down.
This isn't too bad is it?
Well it kind of is.
The Proof
To illustrate how bad it can be, I prepared a little test. The test ran on a Virtual PC image with Windows 2003 Server and .NET Framework 1.1. I used CLR Profiler to examine the memory used by the test application.
This is the simple piece of code I tested:
static void Main(string[] args)
{
string outputstring = string.Empty;
for (int i = 0 ; i < 10000; i++)
{
outputstring += i.ToString() + "\t";
}
Console.WriteLine(outputstring);
}
A real hardcore business-like case ;). Anybody dare to guess how much memory this application used?
479 MB!
Here's some images for you:
[Image 1: memory usage]
[Image 2: method memory usage]
Why is this? As I stated earlier, every concatenation creates a new object, so in our scenario we will have created at least 10,000 objects of various sizes, from 0 to ~100 KB. This eats up our heap rather quickly.
The solution to this problem is of course the StringBuilder, so let's have a look at a similar test where we use a StringBuilder object as our data structure.
The second test looks like this:
static void Main(string[] args)
{
StringBuilder sb = new StringBuilder(49000);
for (int i = 0 ; i < 10000; i++)
{
sb.Append(i.ToString());
sb.Append("\t");
}
Console.WriteLine(sb.ToString());
}
When the profiler examines this piece of code, the result is this:
[Image 3: memory usage with a StringBuilder]
[Image 4: method memory usage]
I will take 474 KB (96 KB for the string) over 479 MB (457 MB for the strings) any day. How about you?
The Conclusion
StringBuilder is a good idea in almost any case where we create dynamic strings. The above example used a silly test to exaggerate the effect, but this could be a real scenario in an ASP.NET application where 1000 users do the concatenation instead.
After doing this test and some others, I'm now a hardcore StringBuilder fan and will probably use it almost anywhere string concatenation is an issue.
|
|
Comments (12) |
|
Naming conventions |
Monday, April 25, 2005 |
On the train back from Gothenburg to Stockholm a discussion about naming conventions emerged.
According to Microsoft's recommendations, Hungarian notation and abbreviations are out and descriptive names are in. This is also confirmed in Code Complete (2nd edition), where descriptive names are really a big thing. Descriptive names are all about proclaiming what the member is actually used for.
So I'll be implementing those recommendations from now on:
TextBox ATextBoxForUsersToEnterFirstName;
You like?? ;)
|
|
Comments (5) |
|
When and how to access properties and fields |
Friday, January 28, 2005 |
Recently on the newsgroups I encountered a question about access modifiers in C# 2.0: when and how to differentiate set and get modifiers. This actually brought a bigger issue into the daylight.
Field encapsulation
The question at hand was whether one should directly access member fields exposed as read-only properties, or instead use the internal modifier for the set part of the property.
Properties are the C# way to encapsulate access to private object data, which is a central part of OO. Encapsulation fulfills two main purposes:
- You hide the implementation details of the data storage from the object user.
- You can at any time enforce strategies or business rules on data reads/writes without the object user knowing about it.
So with these two points in mind, let's explore our two implementation options for the publicly read-only property.
// Option 1;
public class Course
{
private string number;
public string Number
{
get { return this.number; }
internal set { this.number = value; }
}
public Course(string number)
{
this.Number = number;
}
}
// Option 2;
public class Course
{
internal string number;
public string Number
{
get { return this.number; }
}
public Course(string number)
{
this.number = number;
}
}
So what is the actual difference between these two snippets? Well, option 2 allows code that resides in the same assembly to bypass the encapsulation and touch our object data directly.
In my opinion this breaks encapsulation, badly, which in turn breaks good OO design. If we've decided to encapsulate a field, we've encapsulated that field. Since option 2 doesn't have a single point where we could inject strategies and/or business rules, nor change the data storage, we might end up in a situation where a simple refactoring forces us to put extra code around our domain.
If your thought now is "well, my object data never actually needs this protection, it's safe for me to touch the data from wherever", maybe you should rethink your OO design. Maybe properties and encapsulation aren't the solution for you.
I always make it a rule of thumb that "If a member field needs encapsulation, all access to that field should pass through the encapsulation logic".
|
|
Comments (4) |
|
The forgotten implicit operator |
Tuesday, January 18, 2005 |
A couple of months ago, one of my students made an evil remark about the Int32 data type and how it isn't actually possible to inherit from it.
The problem he had was to present golf handicaps in a nice way. He wanted to override the ToString method to format the value held by the integer by some golf rules.
He didn't like the idea of embedding an integer into another struct and expose it through a Value property.
The solution: an implicit cast operator. With simple types and the implicit cast operator, it's fully possible to receive a value without the need for a public property setter.
Implicit casting is when you cast from one data type to another without any operator needed.
e.g;
int i = 42;
long l = i;
This will implicitly cast the i variable into a variable of type long. So how does this information help you?
Well, by creating an implicit cast operator you can decide what happens when you assign a value of another type (here, a double) to your own struct.
public struct Hc {
private double hc;
public Hc(double hc)
{
this.hc = hc;
}
public static implicit operator Hc(double value)
{
return new Hc((double)value);
}
}
This will give us the ability to easily write:
Hc hc1 = 37.13;
And equally easy, you can now add the ToString override;
public override string ToString() {
string format = "{0}";
if ( this.hc > 36 )
{
format = "{0:00}";
}
return string.Format( format, this.hc );
}
Now just add some overloads for the +, - and == operators to your struct and you have a fully featured simple data type.
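For completeness, those overloads could look something like this, a sketch meant to live inside the Hc struct above (== and != must be overloaded in pairs, and overloading == also calls for Equals and GetHashCode overrides):

```csharp
public static Hc operator +(Hc left, Hc right) { return new Hc(left.hc + right.hc); }
public static Hc operator -(Hc left, Hc right) { return new Hc(left.hc - right.hc); }

public static bool operator ==(Hc left, Hc right) { return left.hc == right.hc; }
public static bool operator !=(Hc left, Hc right) { return !(left == right); }

// Keep Equals and GetHashCode consistent with the == overload.
public override bool Equals(object obj) { return obj is Hc && (Hc)obj == this; }
public override int GetHashCode() { return this.hc.GetHashCode(); }
```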
There's a working example in my code zone.
|
|
Comments (1) |
|