
Category: Databases

ORM vs TSQL (Benefits / Drawbacks)

I think this is a subjective matter, as there are strong arguments for both. In my experience, I prefer using ORMs for simple applications built by a small team. For enterprise apps with many developers, nothing beats the simplicity and performance of pure TSQL (or SQL for that matter). Yes, there can be a lot of boilerplate code, but that part can be automated.

And of course, doing both is still an option.

ORM

Benefits

  • Potentially less coding for accessing database data and basic CRUD operations.
  • Create a class and it writes out your database schema (see the sketch after this list).
  • Caching facilities are available for often-accessed data.
  • Entity Framework can script out database tables to C# objects.
  • ORM syntax may generate database-independent code.
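
For example, a minimal Entity Framework code-first sketch (the classes below are illustrative, not from a real schema): define the classes, and EF generates the matching tables on first use.

using System.Data.Entity;

public class Customer
{
  public int CustomerId { get; set; }    // becomes the primary key by convention
  public string FirstName { get; set; }  // becomes an nvarchar column
}

public class ShopContext : DbContext
{
  public DbSet<Customer> Customers { get; set; }  // becomes the Customers table
}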

Drawbacks

  • Does not use SQL directly, so new syntax has to be learned for CRUD/maintenance operations.
  • Can be harder to debug, since the SQL is abstracted away; you have to run SQL Profiler to see what actually gets generated.
  • May not be able to do everything that TSQL can do.
  • Have to know SQL anyway for the things an ORM cannot do.
  • Because of its implicit caching, it may cache far more than necessary when dealing with huge amounts of data.

TSQL

Benefits

  • Know exactly what you’re getting, since there’s no middle layer writing out your SQL.
  • Potentially faster SQL, with no need to go through a translation engine.
  • Every developer knows SQL – it’s the most widespread.
  • SQL is cross-platform (unless you use TSQL-specific commands).
  • No barriers to optimizing TSQL.
  • Instantly/efficiently troubleshoot with SQL Server’s native tools (Management Studio/Query Analyzer).
  • SQL Server objects can be used cross-platform. A TSQL UDF, view, or stored procedure can easily be used by any application, regardless of the platform: e.g. a PHP/Java/Ruby/CF app can use the same SQL Server stored procedure that a C# app is using.
  • Stored procedures, with their cached execution plans, are typically faster than ORM-generated SQL.

Drawbacks

  • May not implicitly cache data like an ORM layer does, but results can be cached with traditional C# code (see the sketch after this list).
  • Potentially more TSQL/C# boilerplate code to write by hand.
  • More advanced TSQL code may be heavily dependent on SQL Server, which can make database migration more challenging.
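
On the caching point, here’s a rough sketch of what “cached with traditional C# code” might look like, using System.Runtime.Caching (the helper and its names are made up for illustration; requires a reference to the System.Runtime.Caching assembly):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

static class QueryCache
{
  // Wrap any plain TSQL data-access call with MemoryCache so frequently-read
  // results are reused, much like an ORM's implicit caching.
  public static IList<T> GetOrLoad<T>(string cacheKey, Func<IList<T>> loadFromDb)
  {
    var cache = MemoryCache.Default;
    var cached = cache.Get(cacheKey) as IList<T>;
    if (cached == null)
    {
      cached = loadFromDb();  // your plain ADO.NET / stored procedure call
      cache.Set(cacheKey, cached, DateTimeOffset.Now.AddMinutes(5));
    }
    return cached;
  }
}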

Better SSMS Objects Management with SQLTreeo

This is so cool and handy. You can hide and organize database objects (procs, tables, views, databases) into folders. This organization is saved either locally to your machine or at the server level, via extended properties. If you save the info on the server, anyone else who has the extension will see it organized the same way.

Check it out.

Importing Excel Data into SQL Server

Sometimes it can be a pain. Why? Because SSMS (or should I say SSIS) uses the Windows Excel driver, which infers the columns’ data types incorrectly and chokes during the import. Sometimes it works, sometimes it doesn’t. The workaround is to import everything as text. First, though, you have to re-save the Excel file as a CSV (which by default uses commas as delimiters and double quotes as text qualifiers).

So…

Step 1:
In Excel, save the file as a CSV, in this case, file.csv.

Step 2:
In SSMS, under the Import wizard, choose “Flat File Source”.

Step 3:
Leave everything at its default, except enter a double quote (") in the “Text qualifier” textbox.

Step 4:
In the Advanced section, select all the columns and make sure OutputColumnWidth is large enough – larger than the longest value in any column. In this case, we’ll make it 150. Make sure the DataType is string [DT_STR].

Step 5:
Leave the rest of the settings alone, choose the target database/table you’re importing into, and finish the import.

Step 6:
Done.

Who’s Locking The Table?

This is the script I use to view who/what is locking the table in SQL Server:

SELECT  l.request_session_id AS SessionID,
        o.Name AS 'Table Locked',
        DATEDIFF(second, at.Transaction_begin_time, GETDATE()) AS 'Duration (Seconds)',
        FORMAT(at.Transaction_begin_time, 'yyyy-MM-dd h:mm:ss tt') 'Started On', -- MM = month (mm would be minutes)
        es.login_name 'User',
        es.[program_name] 'Program',
        COUNT(*) AS Locks
FROM    sys.dm_tran_locks l
        JOIN sys.partitions p ON p.hobt_id = l.resource_associated_entity_id
        JOIN sys.objects o ON o.object_id = p.object_id
        JOIN sys.dm_exec_sessions es ON es.session_id = l.request_session_id
        JOIN sys.dm_tran_session_transactions st ON es.session_id = st.session_id
        JOIN sys.dm_tran_active_transactions at ON st.transaction_id = at.transaction_id
WHERE   resource_database_id = DB_ID()
        AND o.Type = 'U'
GROUP BY at.Transaction_begin_time,
        l.request_session_id,
        o.Name,
        es.login_name,
        es.[program_name]

Restoring an MDF/LDF from Another Machine

If you need to restore an MDF/LDF from one machine (source) onto another machine (target), copy the source’s MDF/LDF files to the target machine. Then start the target SQL Server in single-user mode. Then run SSMS as Administrator on the target machine and attach the files. If you don’t run it as Admin, you’ll get the errors described here:

http://stackoverflow.com/questions/2330439/access-is-denied-when-attaching-a-database

http://stackoverflow.com/questions/4661013/sql-server-attaching-database-access-denied

MongoDB Console Tip: Better Display of Object on Console

Quick tip. This is just a neat way to get a better display of JSON in the console. For example, if I ran this in one of my dbs:

db.client.find()

I would get:

{ "_id" : ObjectId("4da45b3cf980ed161462c2ef"), "ContactEmergencyDetails" : { "awesome" :
[ 1111, 2222, 3333 ], "cool" : "awesome", "dan" : [ 4.5, 5.322 ] }, "firstname" : null, "lastname" : null }

But if I changed it to this (converting the result to an array):

db.client.find().toArray()

It formats it better:

[
  {
    "_id" : ObjectId("4da45b3cf980ed161462c2ef"),
    "ContactEmergencyDetails" : {
      "awesome" : [
        1111,
        2222,
        3333
      ],
      "cool" : "awesome",
      "dan" : [
        4.5,
        5.322
      ]
    },
    "firstname" : null,
    "lastname" : null
  }
]

Dapper – A C# Micro ORM Library Makes Data Access a Snap – Part 1

I’ve recently gotten a chance to try out Dapper – a micro ORM library by Sam Saffron, which originally spawned from Stack Overflow’s codebase. If you’re looking for an extremely lightweight library to access your SQL Server, I highly recommend it. It favors convention over configuration, doesn’t carry the overhead of Entity Framework or NHibernate, and emphasizes speed and ease of use.

Dapper is a collection of extension methods that enhance the Connection object (or rather, extend the IDbConnection interface). It also embraces the use of inline SQL in your code, which is useful when retrieving less trivial result sets. With its buddy, Dapper-Extensions by Thad Smith, it makes DB interaction even easier.

Both of these libraries can be downloaded from NuGet, and you’ll need them to run the following code.

There’s really just one thing that I find irritating about the Dapper project: the documentation. There is some at its Google Code home page, but I find it very lacking.

Anywhoot, to get started, first we need to create our table. Once we have that and add some data, we’ll use Dapper to extract the data and map it to objects.

----------------------------------------
-- Customer Schema
----------------------------------------
CREATE TABLE [dbo].[Customer](
  [CustomerID] [uniqueidentifier] NOT NULL,
  [FirstName] [nvarchar](50) NULL,
  [LastName] [nvarchar](50) NULL,
  [Age] [int] NULL,
  [IsAllowed] [bit] NULL,
  [DateRegistered] [datetime] NULL,
  [AdditionalDetails] [nvarchar](max) NULL,
  CONSTRAINT [PK_Customer_1] PRIMARY KEY CLUSTERED
  (
    [CustomerID] ASC
  ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[Customer] ADD  CONSTRAINT [DF_Customer_CustomerID]  DEFAULT (newid()) FOR [CustomerID]
GO

----------------------------------------
-- Data
----------------------------------------
INSERT INTO [dbo].[Customer]([CustomerID], [FirstName], [LastName], [Age], [IsAllowed], [DateRegistered], [AdditionalDetails])
  SELECT N'5458dff8-cea2-4bdb-9431-1dea56f109f8', N'Bruce', N'Wayne', 31, 1, '20080604 08:13:44.000', N'I am Batman' UNION ALL
  SELECT N'1767bc57-af03-4b17-891f-2aa9af244180', N'Peter', N'Parker', 25, 1, '20120103 12:22:28.000', N'I am Spider-Man' UNION ALL
  SELECT N'dc2bf42d-045b-4189-bd6b-8bf1bf120291', N'Eddie', N'Brock', 34, 0, '20040123 02:45:41.000', N'We are Venom'

OK, so that gives us a table with three rows of data to play with.

Because Dapper favors convention over configuration, it’s important to note that when you create your class:

  • The name of your class must match the name of the table, otherwise an exception is thrown.
  • All column names must match all class property names, otherwise an exception is thrown.
  • If a column is not used (i.e. not written out in the SELECT statement) but the class property exists, the property will be set to null or a default value – more on this later.

Now that we’re aware of these points, let’s see the code to get started:

using System;
using Dapper;
using System.Data.SqlClient;

class Customer
{
  public Guid CustomerID { get; set; }
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public int Age { get; set; }
  public bool IsAllowed { get; set; }
  public DateTime DateRegistered { get; set; }
  public string AdditionalDetails { get; set; }
}

class Program
{
  private static void Main()
  {

    using (SqlConnection conn = new SqlConnection("Data Source=NARUTO;Initial Catalog=TESTDATABASE;Integrated Security=True"))
    {
      conn.Open();

      // Put dapper code here.

      conn.Close();
    }

    Console.ReadLine();
  }
}

OK, so there’s nothing out of the ordinary here. We’re creating the class we want to map our data to, creating a Connection object, opening, closing, and then disposing of it (via using). Now let’s suck up some data in our Main() method:

private static void Main()
{
  using (SqlConnection conn = new SqlConnection("Data Source=NARUTO;Initial Catalog=TESTDATABASE;Integrated Security=True"))
  {
    conn.Open();

    IEnumerable<Customer> customers = conn.Query<Customer>("SELECT * FROM Customer");

    foreach (Customer c in customers)
    {
      Console.WriteLine("CustomerID: " + c.CustomerID);
      Console.WriteLine("FirstName: " + c.FirstName);
      Console.WriteLine("LastName: " + c.LastName);
      Console.WriteLine("Age: " + c.Age);
      Console.WriteLine("IsAllowed: " + c.IsAllowed);
      Console.WriteLine("DateRegistered: " + c.DateRegistered);
      Console.WriteLine("AdditionalDetails: " + c.AdditionalDetails);
      Console.WriteLine("-----------------------------------------------");
    }

    conn.Close();
  }

  Console.ReadLine();
}

As we see, the Query extension method pulls in the data based on the SELECT statement and returns a strongly-typed list of Customer. You should see each customer’s properties printed to the console.

Now what if we want to get the number of records in the table? Well, because Query returns an IEnumerable<Customer>, there is no Count property. To get one, we cast to a List:

List<Customer> customers = (List<Customer>)conn.Query<Customer>("SELECT * FROM Customer");
Console.WriteLine(customers.Count);
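
Alternatively, with System.Linq in scope, you can skip the cast and use the Count() extension method – a quick sketch:

using System.Linq;

int count = conn.Query<Customer>("SELECT * FROM Customer").Count();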

That should give us the count. To see how the data types are mapped, we can take a look at the Dapper source code in SqlMapper.cs. For reference, you can take a peek at these two sources to understand SQL Server/CLR/.NET Framework data types:

http://stackoverflow.com/questions/425389/c-sharp-equivalent-of-sql-server-2005-datatypes

http://msdn.microsoft.com/en-us/library/ms131092.aspx

For our example, though, the data types mapped cleanly.

Mismatches and Nulls

In the query that we ran, we assumed that all the columns in the db match all the properties in the class. What if we have a mismatch? What if we have this?

IEnumerable<Customer> customers = conn.Query<Customer>("SELECT LastName FROM Customer");

Notice that we’re only retrieving the LastName, while we have other properties in our Customer class. In that case, every property except LastName comes back with a default value.

So from the above, we notice that Dapper handles unmapped properties by:

  • Setting strings to null
  • Setting a Guid to 00000000-0000-0000-0000-000000000000 (Guid.Empty)
  • Setting an int to 0
  • Setting a bool to false
  • Setting a DateTime to 1/1/0001 12:00:00 AM (DateTime.MinValue)

All of which happens when Dapper cannot find a column value to map to a class property. Also, if your SELECT references a column name that does NOT exist, you’ll get a SqlException of “Invalid column name”.

Parameterization

What if we want to pass a parameter into our SQL? Let’s try it:

IEnumerable<Customer> customers = conn.Query<Customer>("SELECT * FROM Customer WHERE Age > @Age", new { Age = 30 });

Notice that I’m passing in an anonymous object. I could also have done:

Customer someCustomer = new Customer
{
  Age = 33,
  FirstName = "Clark Kent"
};

IEnumerable<Customer> customers = conn.Query<Customer>("SELECT * FROM Customer WHERE Age > @Age", someCustomer);

Which yields the customers older than 33.

The extra property in someCustomer (FirstName = “Clark Kent”) is silently ignored because it’s not referenced in the query.

When the above is run, SQL Server Profiler shows the following:

exec sp_executesql N'SELECT * FROM Customer WHERE AGE > @Age',N'@Age int',@Age=33

So we know it’s parameterized.
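
On a related note (my addition, not from the original post): Dapper will also expand a collection into an IN list for you.

// Dapper rewrites "IN @Ages" as "IN (@Ages1, @Ages2)" and binds each value.
IEnumerable<Customer> customers = conn.Query<Customer>(
    "SELECT * FROM Customer WHERE Age IN @Ages",
    new { Ages = new[] { 25, 31 } });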

To call a stored procedure and map the results to objects, assume we have the following sproc:

CREATE PROCEDURE GetCustomers
(
  @IsAllowed BIT
)
AS
BEGIN
  SELECT * FROM Customer
  WHERE IsAllowed = @IsAllowed
END

We do:

// The second @ token does not have to be named "@IsAllowed"; it can be named anything,
// as long as the property in the anonymous object has the same name.
IEnumerable<Customer> customers = conn.Query<Customer>("GetCustomers @IsAllowed = @IsAllowed", new { IsAllowed = true });

Also accepted:

IEnumerable<Customer> customers = conn.Query<Customer>("GetCustomers @IsAllowed = @Allow", new { Allow = true });

Or you can be more explicit by specifying the command type (StoredProcedure):

IEnumerable<Customer> customers = conn.Query<Customer>("GetCustomers", new { IsAllowed = true }, commandType: CommandType.StoredProcedure);

Or you can build the parameters explicitly with DynamicParameters:

DynamicParameters parameters = new DynamicParameters();
parameters.Add("@IsAllowed", true, dbType: DbType.Boolean);

IEnumerable<Customer> customers = conn.Query<Customer>("GetCustomers", parameters, commandType: CommandType.StoredProcedure);
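
DynamicParameters also supports output parameters, which comes in handy with sprocs. A sketch, assuming a hypothetical @TotalCount OUTPUT parameter on the sproc:

DynamicParameters parameters = new DynamicParameters();
parameters.Add("@IsAllowed", true, dbType: DbType.Boolean);
parameters.Add("@TotalCount", dbType: DbType.Int32, direction: ParameterDirection.Output);

conn.Execute("GetCustomers", parameters, commandType: CommandType.StoredProcedure);

int total = parameters.Get<int>("@TotalCount");  // read the output value back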

To insert data, we do as follows:

Customer customer = new Customer{
  FirstName = "Mazinger",
  LastName = "Z",
  Age = 40,
  IsAllowed = true,
  DateRegistered = DateTime.Now,
  AdditionalDetails = "Metallic defender"
};

conn.Execute(@"INSERT INTO Customer
                                  ( FirstName ,
                                  LastName ,
                                  Age ,
                                  IsAllowed ,
                                  DateRegistered ,
                                  AdditionalDetails

                   )
                   VALUES  ( @FirstName ,
                                 @LastName ,
                                 @Age ,
                                 @IsAllowed ,
                                 @DateRegistered ,
                                 @AdditionalDetails
                               )"
                 , customer);
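
One nicety worth mentioning (my addition, not from the post): Execute returns the number of rows affected, and if you hand it a collection, Dapper runs the INSERT once per element:

List<Customer> newCustomers = new List<Customer>
{
  new Customer { FirstName = "Koji", LastName = "Kabuto", Age = 16, IsAllowed = true,
                 DateRegistered = DateTime.Now, AdditionalDetails = "Pilot" },
  new Customer { FirstName = "Sayaka", LastName = "Yumi", Age = 16, IsAllowed = true,
                 DateRegistered = DateTime.Now, AdditionalDetails = "Pilot" }
};

// Executes the INSERT once per Customer in the list and returns total rows affected.
int rows = conn.Execute(@"INSERT INTO Customer
                          ( FirstName, LastName, Age, IsAllowed, DateRegistered, AdditionalDetails )
                          VALUES
                          ( @FirstName, @LastName, @Age, @IsAllowed, @DateRegistered, @AdditionalDetails )",
                        newCustomers);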

This ends part 1 of the tutorial. I will post the 2nd part soon.

Search For Column Name in All Databases

Whipped up this little script today to search for a column name across all my databases.

--------------------------------------------------------------------------------------------------
-- Search for the column name
--------------------------------------------------------------------------------------------------
DECLARE @searchFor AS VARCHAR(50)     = 'CustomerID' -- keyword to search for
DECLARE @databaseList AS VARCHAR(MAX) = 'Database1,Database2,Database3,Database4' -- comma delimited

DECLARE @orderBy AS VARCHAR(50)       = 'DateCreated ASC' -- column, optionally followed by ASC/DESC
-- Order by one of these:
-- ObjectID, ColumnName, Table, DatabaseName, DateCreated, ModifiedDate
--------------------------------------------------------------------------------------------------

DECLARE @sql NVARCHAR(MAX) = ''
-- loop
SET @databaseList = @databaseList + ','
DECLARE @pos AS INT
DECLARE @val AS VARCHAR(255)

WHILE PATINDEX('%,%', @databaseList) <> 0
BEGIN       
  SELECT  @pos = PATINDEX('%,%', @databaseList)
  SELECT  @val = LEFT(@databaseList, @pos - 1)       
  SELECT  @databaseList = STUFF(@databaseList, 1, @pos, NULL)
  SET @sql = @sql + 
             'SELECT  *
              FROM    ( SELECT  portalColumns.OBJECT_ID AS ObjectID ,
                                portalColumns.[NAME] ColumnName ,
                                ''[' + @val + '].dbo.['' + portalTables.[NAME] + '']'' AS [Table] ,
                                ''[' + @val + ']'' AS ''DatabaseName'',                  
                                create_date AS ''DateCreated'' ,
                                modify_date AS ''ModifiedDate''
                        FROM    [' + @val + '].sys.columns portalColumns
                                INNER JOIN [' + @val + '].sys.tables portalTables ON portalColumns.object_id = portalTables.object_id
                        WHERE   portalColumns.name LIKE ''%' + @searchFor + '%''
                      ) AS Results
              UNION '                       
END

-- Strip the trailing UNION and append the ORDER BY
SET @sql = LEFT(@sql, LEN(@sql) - 5) + ' ORDER BY ' + @orderBy

--PRINT @sql    
        
EXECUTE sp_executesql  @sql

Upgrading from SQL Server 2000 to 2008

First, have a look at the Microsoft whitepaper that gives you an overview of the process. Make sure you get the SQL Server 2008 Upgrade Advisor: it analyzes databases and checks for incompatibilities, and it does not write any data to the database you’re analyzing. I highly recommend it if you’re making as big a jump as we did. Also, apply SP1 (currently the newest), and Windows updates.

The machine I’m working on is a 64-bit machine with 16 GB RAM. Make sure you adjust the max memory taken up by SQL Server. By default, it will try to consume all it can (the setting defaults to roughly 2 petabytes). To prevent starving the OS of memory on a machine dedicated to SQL Server, allot 10–15% of memory to the OS; the rest goes to SQL Server. With 16 GB, I set it to 13 GB (13312 MB).


Also, I urge you to change the compatibility level to SQL Server 2008 (100), unless of course the Upgrade Advisor throws a lot of issues and you’re on a tight deadline. Certain features have been deprecated and removed. For example, this will no longer work in 2008 (ORDER BY [table alias].[column alias]):

select AreaID aid, ParentID, Name 
from Areas a
order by a.aid

Nor will older ways of truncating the transaction log; BACKUP LOG ... WITH TRUNCATE_ONLY, for example, was removed in 2008.

To avoid problems for future upgrades, it’s best to switch the compatibility to 2008.

Also, as an obvious reminder, keep a backup, especially if you’re going to do a detach/attach migration. Once you attach the 2000 db files to 2008, you can’t reattach them to 2000. I prefer a traditional .bak restore.

Lastly, I highly recommend this checklist when upgrading.

SSIS Data Import from Excel

I’ve always found the SSIS import tool a bit clunky. I tried to import a CSV file and it kept crapping out before completing.

– Copying to [dbo].[export] (Stopped)

Messages

  • Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column “Email Address” returned status value 4 and status text “Text was truncated or one or more characters had no match in the target code page.”.
    (SQL Server Import and Export Wizard)
  • Error 0xc020902a: Data Flow Task 1: The “output column “Email Address” (10)” failed because truncation occurred, and the truncation row disposition on “output column “Email Address” (10)” specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
    (SQL Server Import and Export Wizard)

I sifted through the data – only a few hundred rows – and could not see anything wrong with it. I made the columns the right type and increased the column size beyond the largest number of characters in any field. I searched Google and the Microsoft forums, trying a handful of solutions, and nothing worked.

So what did I do? I’d had enough: I opened the CSV in Excel, re-saved it as an Excel 2007 file, and re-ran the SSIS Import Wizard against the Excel file instead of the CSV. Voila. I’m still baffled, though, as to what exactly the problem was.

Scripting Out SQL Server 2000 Database Objects from SQL Server 2008 Management Studio

If you’re still stuck working with a SQL Server 2000 database, there’s no reason why you shouldn’t be using SSMS 2008 R2. There are lots of great add-ons for it, and it provides a rich set of features, aside from merging Query Analyzer and Enterprise Manager.

One thing to watch out for: if you script out your database objects while connected to SQL Server 2000, by default it will script out TSQL that is compatible only with SQL Server 2008. There are two places to switch this:

Tools -> Options -> SQL Server Object Explorer -> Scripting

Database -> Tasks -> Generate Scripts

Delete Duplicate Rows

If you’ve ever tried to delete two identical rows from the SSMS edit grid, you may have come across a little beauty of an error, complaining that the values do not uniquely identify a single row.

To get around this, you have to delete one row at a time using SET ROWCOUNT. So you would do something like:

SET ROWCOUNT 1
GO
DELETE FROM users_to_optin where email = 'someone@yahoo.com'

That would delete one record at a time, even if there were multiple rows with ‘someone@yahoo.com’. (On SQL Server 2005 and later, DELETE TOP (1) FROM ... achieves the same thing.)

After you’re done, make sure you set the rowcount back to 0, which turns the limitation off.

SET ROWCOUNT 0
GO

Check out more about ROWCOUNT.

Renaming a SQL Server Table (Watch Out)

Sometimes while testing on your local db, one gets into the habit (I admit to having done this in the past) of renaming a table before replacing it, as a quick backup. The problem with this strategy is that the rename doesn’t carry over the table’s dependencies. For example, if you had a table formsClients, renamed it to formsClients_old, and then restored the formsClients table (say, from an SSIS import), you’d get an error from the leftover dependencies.

If we inspect the dependencies, we see that in this case it’s a foreign key constraint that still hangs off the renamed table.

NULLs in MongoDB

By default, the C# driver’s serializer writes every property of the object to the collection when inserting, even ones you never set, rather than omitting them.

So if you had an object with 50 properties, set only 1 of them, and passed it to MongoDB, it would insert all 50 properties, which is wasteful (depending on how you look at it).

One can decorate a property with the [BsonIgnoreIfNull] attribute, and the driver will not write that element to the db if it’s null. However, if you don’t want to decorate every property, you can set this once (and forget it) by creating a MongoDB “ConventionProfile” across the entire application. We have this setup in Global.asax.cs so it gets created when the application is loaded:

var profile = new ConventionProfile(); 
profile.SetIgnoreIfNullConvention(new AlwaysIgnoreIfNullConvention()); 
BsonClassMap.RegisterConventions(profile, t => true);
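
For reference, a minimal sketch of the per-property attribute approach (the Client class here is just illustrative):

using MongoDB.Bson.Serialization.Attributes;

public class Client
{
  public string FirstName { get; set; }

  // Skipped entirely in the stored document whenever it is null.
  [BsonIgnoreIfNull]
  public string AdditionalDetails { get; set; }
}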

However, even with the convention profile in place, it appears to do this only for strings that are null. If you have a property that’s a Boolean, Integer, or DateTime, it will still write the element to the database, setting Booleans to false, Integers to 0, and DateTimes to an ISO timestamp. It does this because these are value types, so they can’t be null. To “make” them nullable, you use the question mark suffix on the data type, like so:

bool? IsEnabled;
int? MagicNumber;
DateTime? TimeStamp;

All three are shorthand for:

Nullable<bool> IsEnabled;
Nullable<int> MagicNumber;
Nullable<DateTime> TimeStamp;

Upgrading to SQL Server 2008 (Part II)

Some time ago, I published some tips on upgrading to 2008. Here are some additional notes to keep in mind.

For upgrading, my plan of attack in the past has been as follows:

1. Run the Upgrade Advisor on all the databases (this doesn’t require much effort: just download, install, point it at the db, and run).

2. On production, do a custom install and only install what you need. You can do a side-by-side install and run SQL Server 2008 while SQL Server 2000 exists on the same server.

3. Analyze the Advisor’s reports and identify the SQL objects that need to be refactored.

4. Refactor TSQL as needed

5. Now, TSQL isn’t just in the DBs; it may also exist in the app code and other web files (.aspx, etc.). This is where it gets more challenging. Check whether the analyzer can open .trc files. If so, run SQL Server Profiler while the site is being used and those queries are being run, then open the .trc file in the analyzer so it can check their validity.

If this doesn’t work, log the Profiler results to a table, put the captured TSQL into a sproc, and point the analyzer at that sproc to check the syntax.

You can automate the crawling with Selenium, or write a script around WGET to fetch pages and look for HTTP 500 error codes (a C# sketch of this check follows after this list).

6. Refactor TSQL as needed.

7. Test the websites against 2008. Ideally you want to start testing with the latest compatibility level. A lower compatibility level causes more overhead (since it has to support older behavior) and allows the use of deprecated features that MS will abandon in the next release.

8. Optimize the settings for the db server. Strip out all the dbs you don’t need, like Northwind and AdventureWorks, to avoid security problems.

9. Before pushing it live, make sure you back up all dbs, including all system dbs.

10. As soon as the switch is flipped, keep an eye on PerfMon and the SSMS Activity Monitor, and have the appropriate DMVs handy for monitoring.

11. Even if the application(s) did not throw errors while testing on dev, there’s a chance that once you go to production, errors start appearing due to compatibility (maybe the more obscure pages were not tested). Be ready to flip the compatibility mode back to 2000; this takes effect instantly.
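
Here’s the rough C# sketch of the step-5 check mentioned above. The URL list is whatever pages you want to smoke-test; all the names here are made up:

using System;
using System.Collections.Generic;
using System.Net;

static class SmokeTest
{
  // Request each page and report the HTTP status; a 5xx usually means a page
  // is throwing a TSQL compatibility error.
  public static void CheckUrls(IEnumerable<string> urls)
  {
    foreach (string url in urls)
    {
      try
      {
        var request = (HttpWebRequest)WebRequest.Create(url);
        using (var response = (HttpWebResponse)request.GetResponse())
        {
          Console.WriteLine("{0} -> {1}", url, (int)response.StatusCode);
        }
      }
      catch (WebException ex)
      {
        var response = ex.Response as HttpWebResponse;
        Console.WriteLine("{0} -> {1}", url,
          response != null ? ((int)response.StatusCode).ToString() : ex.Message);
      }
    }
  }
}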

Validating Checkboxes in ASP.NET MVC

So here’s a very barebones example that validates all the checkboxes on a page. It does the following:

  • Maps the checkboxes to a dictionary object.
  • Creates a custom model binder that maps the checkbox values to the dictionary object.

Let’s get started.

The Model

public class Event
{
  public string Name { get; set; }
  public Dictionary<string, bool> Weekdays { get; set; }
}

The Custom Model Binder

  public class EventBinder : IModelBinder
  {
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
      Event ev = new Event();      

      ev.Name = controllerContext.HttpContext.Request.Form["Name"];

      // Let's initialize all the keys to false
      ev.Weekdays = new Dictionary<string, bool>()
      {
        {"Monday", false },
        {"Tuesday", false },
        {"Wednesday", false },
        {"Thursday", false },
        {"Friday", false },
      };      

      // Now let's trigger to true when the checkbox is checked.
      if (controllerContext.HttpContext.Request.Form["Weekdays[Monday]"] == "on")
        ev.Weekdays["Monday"] = true;

      if (controllerContext.HttpContext.Request.Form["Weekdays[Tuesday]"] == "on")
        ev.Weekdays["Tuesday"] = true;

      if (controllerContext.HttpContext.Request.Form["Weekdays[Wednesday]"] == "on")
        ev.Weekdays["Wednesday"] = true;

      if (controllerContext.HttpContext.Request.Form["Weekdays[Thursday]"] == "on")
        ev.Weekdays["Thursday"] = true;

      if (controllerContext.HttpContext.Request.Form["Weekdays[Friday]"] == "on")
        ev.Weekdays["Friday"] = true;

      return ev;
    }
  }
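
Side note (my addition): instead of decorating the action parameter as the controller below does, you can register the binder once in Application_Start:

// In Global.asax.cs – every Event parameter now goes through EventBinder automatically.
ModelBinders.Binders.Add(typeof(Event), new EventBinder());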

The Controller

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using MvcApplication6.Models;

namespace MvcApplication6.Controllers
{
  public class HomeController : Controller
  {
    public ActionResult Index()
    {
      // We need to instantiate an event object before setting the checkboxes to default.
      Event e = new Event();      

      // Let's initialize the checkboxes.
      e.Weekdays = new Dictionary<string, bool>()
      {
        {"Monday",    false },
        {"Tuesday",   false },
        {"Wednesday", false },
        {"Thursday",  false },
        {"Friday",    false },
      };

      ViewBag.IsFormValid = false;

      return View("Home", e);
    }

    [HttpPost]
    public ActionResult Index([ModelBinder(typeof(EventBinder))] Event e)
    {
      // Let's validate the name.
      if (string.IsNullOrEmpty(e.Name))
      {
        ModelState.AddModelError("Name", "First name is required.");
      }

      // Validate the checkbox. Make sure they are all checked.
      foreach (var day in e.Weekdays)
      {
        if (day.Value == false)
        {
          ModelState.AddModelError("Weekdays[" + day.Key + "]", day.Key + " has to be checked!" );
        }
      }

      ViewBag.IsFormValid = false;

      if (ModelState.IsValid)
      {
        ViewBag.IsFormValid = true;
      }        

      return View("Home", e);
    }
  }
}

The View

@model MvcApplication6.Models.Event
@{
  Layout = null;
}

<style type="text/css">
  .formErrors
  {
    color: Red;
  }
</style>

<div>
  <h1>Form Is: << @ViewBag.IsFormValid >></h1>
  <h4>Fill out the name and check every checkbox so the form becomes true (valid).</h4>

  @using (Html.BeginForm("Index", "Home"))
  {
    // FirstName TextBox
    @Html.LabelFor(model => model.Name, "FirstName")
    @Html.TextBoxFor(model => model.Name)
    @Html.ValidationMessageFor(model => model.Name, null, new { @class = "formErrors" })
    <hr />

    // Weekday Checkboxes
    foreach (var day in Model.Weekdays)
    {
      if (day.Value)
      {
        <input type="checkbox" name="Weekdays[@day.Key]" checked="checked" /> @day.Key
      }
      else
      {
        <input type="checkbox" name="Weekdays[@day.Key]" /> @day.Key
      }

      @Html.ValidationMessageFor(model => model.Weekdays[day.Key], null, new { @class = "formErrors" })
    }
  }
</div>