
Validating Checkboxes in ASP.NET MVC

So here's a very barebones example that validates all the checkboxes on a page. In addition, it does the following:

  • Maps the checkboxes to a dictionary object.
  • Creates a custom model binder (that maps the checkbox values to the dictionary object).

Let’s get started.

The Model

public class Event
{
  public string Name { get; set; }
  public Dictionary<string, bool> Weekdays { get; set; }
}

The Custom Model Binder

  public class EventBinder : IModelBinder
  {
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
      Event ev = new Event();      

      ev.Name = controllerContext.HttpContext.Request.Form["Name"];

      // Let's initialize all the keys to false
      ev.Weekdays = new Dictionary<string, bool>()
      {
        {"Monday", false },
        {"Tuesday", false },
        {"Wednesday", false },
        {"Thursday", false },
        {"Friday", false },
      };      

      // Now let's flip the value to true when the checkbox is checked.
      if (controllerContext.HttpContext.Request.Form["Weekdays[Monday]"] == "on")
        ev.Weekdays["Monday"] = true;

      if (controllerContext.HttpContext.Request.Form["Weekdays[Tuesday]"] == "on")
        ev.Weekdays["Tuesday"] = true;

      if (controllerContext.HttpContext.Request.Form["Weekdays[Wednesday]"] == "on")
        ev.Weekdays["Wednesday"] = true;

      if (controllerContext.HttpContext.Request.Form["Weekdays[Thursday]"] == "on")
        ev.Weekdays["Thursday"] = true;

      if (controllerContext.HttpContext.Request.Form["Weekdays[Friday]"] == "on")
        ev.Weekdays["Friday"] = true;

      return ev;
    }
  }
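Since all the weekday keys are already in the dictionary, the repeated if-blocks above could optionally be collapsed into a loop. A minimal sketch (this variation isn't part of the original example; it needs a using System.Linq; directive for ToList()):

      // Copy the keys first (ToList) so we can safely update values while looping.
      foreach (string day in ev.Weekdays.Keys.ToList())
      {
        if (controllerContext.HttpContext.Request.Form["Weekdays[" + day + "]"] == "on")
          ev.Weekdays[day] = true;
      }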

The Controller

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using MvcApplication6.Models;

namespace MvcApplication6.Controllers
{
  public class HomeController : Controller
  {
    public ActionResult Index()
    {
      // We need to instantiate an event object before setting the checkboxes to default.
      Event e = new Event();      

      // Let's initialize the checkboxes.
      e.Weekdays = new Dictionary<string, bool>()
      {
        {"Monday",    false },
        {"Tuesday",   false },
        {"Wednesday", false },
        {"Thursday",  false },
        {"Friday",    false },
      };

      ViewBag.IsFormValid = false;

      return View("Home", e);
    }

    [HttpPost]
    public ActionResult Index([ModelBinder(typeof(EventBinder))] Event e)
    {
      // Let's validate the name.
      if (string.IsNullOrEmpty(e.Name))
      {
        ModelState.AddModelError("Name", "First name is required.");
      }

      // Validate the checkbox. Make sure they are all checked.
      foreach (var day in e.Weekdays)
      {
        if (day.Value == false)
        {
          ModelState.AddModelError("Weekdays[" + day.Key + "]", day.Key + " has to be checked!" );
        }
      }

      ViewBag.IsFormValid = false;

      if (ModelState.IsValid)
      {
        ViewBag.IsFormValid = true;
      }        

      return View("Home", e);
    }
  }
}
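As an alternative to decorating the action parameter with [ModelBinder], the binder could also be registered once in Global.asax.cs (an optional variation, not shown in the original example), so every Event parameter uses it automatically:

    protected void Application_Start()
    {
      // ...existing route/filter registration...

      // Use EventBinder whenever an action expects an Event parameter.
      ModelBinders.Binders.Add(typeof(Event), new EventBinder());
    }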

The View

@model MvcApplication6.Models.Event
@{
  Layout = null;
}

<style type="text/css">
    .formErrors
    {
      color: Red;
    }
</style>
<div>
<h1>Form Is: << @ViewBag.IsFormValid >></h1>
<h4>Fill out the name and check every checkbox so the form becomes true (valid).</h4>
@using (Html.BeginForm("Index", "Home"))
    {
      // FirstName TextBox
      @Html.LabelFor(model => model.Name, "FirstName")
      @Html.TextBoxFor(model => model.Name)
      @Html.ValidationMessageFor(model => model.Name, null, new { @class = "formErrors" })
<hr />

      // Weekday Checkboxes
      foreach (var day in Model.Weekdays)
      {
        <input type="checkbox" name="Weekdays[@day.Key]"
        @if (day.Value)
        {
          <text>checked="checked"</text>
        }
        /> @day.Key

        @Html.ValidationMessageFor(model => model.Weekdays[day.Key], null, new { @class = "formErrors" })
      }
    }</div>

Kooboo Setup Notes

I had to apply these settings to make Kooboo work on my machine connecting to MongoDB on my VM:

In Web.config:

For the MongoDB.config (when you follow the instructions to set it up http://www.kooboo.com/Documents/Detail/CMS/v3/MSSQL-SQLCE-RavenDB-and-MongoDB ), I copied it to:

Kooboo_CMS/

And you may have to copy it to C:\kooboo\Kooboo_CMS\Kooboo_CMS\bin as well. (my installation was a little flaky)

You’ll have to build the project (in VS2010) if you already ran the site (the default data store is XML files).

When all is said and done, make sure that the db was created on MongoDB. In this case, it created the db samplesite.

Kooboo CMS First Impressions

Kooboo is a CMS based on ASP.NET MVC. Recently, I got a chance to take it for a spin, and here are some of my thoughts. Keep in mind that the drawbacks here may just come from my ignorance of how to use the tool. :) I'll update this as I learn more about the inner workings.

Benefits

  • Admin Panel’s UI is intuitive for designers/programmers
    • Easy to add pages
    • Easy to add your own themes/styles
    • Easy to create your own type of content
    • Easy to add content
  • Lots of Features, more than Orchard
  • Mature, has been around for a while (2008)
  • Views are coded in Razor
  • Can connect to MongoDB and other datasource types
  • Versioning of any piece of content and view differences
  • Manage website resources easily – images/documents/etc.

Drawbacks

These may not be drawbacks once I figure out the "how" and get a better understanding.

  • Once a site is created, when I migrated from XML to MongoDB, I lost all the website data from the XML files.
  • Admin Panel’s UI may not be intuitive to non-designers/programmers.
  • Site directory structure Kooboo generates is not the same as the traditional ASP.NET MVC.

Let’s take a look. For a site I created using Kooboo, named “batman”:

  • When a content type is created, it does not create a C# class file. (I didn’t see one at least, in the directory structure.) It does, however, create a MongoDB collection for the content, there’s just no C# class mapped to it.
  • There's no clear way to bind a View to a model class as in traditional ASP.NET MVC, since Kooboo doesn't create a C# class file. It also doesn't follow the traditional file/folder naming convention where each View maps to a model.
  • Community not as large as other CMS communities (Orchard, Umbraco, DNN).

I’ll keep exploring, but this is what I’ve found so far.

Getting Started with the MongoDB C# Driver

Here's my attempt at writing a super-barebones tutorial for using the MongoDB C# Driver. It's a very hands-on tutorial-by-example, showing you the basics of the driver by creating console apps.

I'm assuming you've installed the official MongoDB C# driver and read the intro. A heads-up on how the driver gets set up: the .msi installs the assemblies into the GAC, while the zip file has assemblies that can simply be referenced in your project.

After you’ve done those two things, you can proceed. :)

Dependencies

In projects, you have the option of utilizing the following namespaces:

using MongoDB.Bson;
using MongoDB.Driver;
using MongoDB.Bson.IO;
using MongoDB.Bson.Serialization;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Bson.Serialization.Conventions;
using MongoDB.Bson.Serialization.IdGenerators;
using MongoDB.Bson.Serialization.Options;
using MongoDB.Bson.Serialization.Serializers;
using MongoDB.Driver.Builders;
using MongoDB.Driver.GridFS;
using MongoDB.Driver.Wrappers;

The most commonly used are Bson, Driver, Serialization, and Serialization.Attributes. This tutorial uses these four in all code samples, so make sure you include them in your app even if I don't show them in a given sample. You can learn more about the API/namespaces by checking out the docs.

Connecting to a MongoDB Database

Connecting is pretty simple. Once you include the C# driver assemblies, you connect as follows:

namespace ConsoleApplication1
{
  class Program
  {
    static void Main(string[] args)
    {
      MongoServer  server = MongoServer.Create("mongodb://work2");
      MongoDatabase Cars = server.GetDatabase("Cars");
    }
  }
}

That's pretty much all you need to connect. Of course, I recommend putting the connection string in your App.config/Web.config, or in some initializer.
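For example, here's a minimal sketch of reading the connection string from the config file (this assumes a hypothetical connection string named "MongoDB" in the <connectionStrings> section and a reference to System.Configuration; it isn't part of the original sample):

      using System.Configuration;   // at the top of the file, for ConfigurationManager

      // Pull the connection string out of App.config/Web.config instead of hard-coding it.
      string connectionString = ConfigurationManager.ConnectionStrings["MongoDB"].ConnectionString;
      MongoServer server = MongoServer.Create(connectionString);
      MongoDatabase cars = server.GetDatabase("Cars");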

Inserting a document on MongoDB

There are tons of ways to do this, but one of the simplest is to create a class, create an instance of that class, and pass it to a MongoDB collection object that serializes the object into BSON and submits it to your database.

So for this example, before inserting a document into the database from C#, one has to do two things (aside from establishing a connection and choosing which db to write to):

  1. Create a Generic Collection object. In this case, it’s a MongoDB collection.
  2. Create a class that’s to be used by the Generic Collection.

Those two go together.

Also, note that while working with MongoDB in C#, the word "collection" may be thrown around loosely. In MongoDB you have a "Collection," which is comparable to a table in a traditional ER database. A MongoDB "Collection" holds "documents," which are comparable to records in a traditional ER database. On the C# side, a collection is a data structure that, unlike an array, can be resized dynamically and is more flexible in terms of functionality.

The MongoDB Generic Collection is a C# collection that can hold any type of object, but it has to know what type of object it's holding. You let it know the type by creating a class, and then, when you create the collection, you specify the name of that class. So for example, this is the syntax for declaring a Generic Collection that holds Cars objects (where Cars is defined as a class):
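      // A strongly typed MongoDB collection that will hold Cars documents.
      MongoCollection<Cars> vehicles;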

After that line of code, you populate the vehicles object (remember, it's a MongoDB Generic collection that holds Cars objects) by calling the GetCollection method on the Cars database object we created in the last section, "Connecting to a MongoDB Database":
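      // Fetch (or lazily create) the "vehicles" collection from the Cars database.
      vehicles = Cars.GetCollection<Cars>("vehicles");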

In the above statement, if MongoDB does not find a collection on the database called “vehicles,” it will create it for you.

What does that Cars class look like? Pretty simple:

  public class Cars
  {
    public string _id { get; set; }
    public string brand { get; set; }
    public string lname { get; set; }
    public Int32 miles { get; set; }
    public int age { get; set; }
  }

Then of course, before sending it to the MongoDB database, you create an object and set the values of the object:

      Cars newCar = new Cars();
      newCar._id = System.Guid.NewGuid().ToString();
      newCar.brand = "Honda";
      newCar.miles = 186000;
      newCar.age = 1;

And then insert it in MongoDB:

vehicles.Insert(newCar);

To put it in perspective, here’s the entire program:

namespace ConsoleApplication1
{
  public class Cars
  {
    public string _id { get; set; }
    public string brand { get; set; }
    public string lname { get; set; }
    public Int32 miles { get; set; }
    public int age { get; set; }
  }

  class Program
  {
    static void Main(string[] args)
    {       

      MongoServer  server = MongoServer.Create("mongodb://work2");
      MongoDatabase Cars = server.GetDatabase("Cars");

      MongoCollection<Cars> vehicles;

      vehicles = Cars.GetCollection<Cars>("vehicles");

      Cars newCar = new Cars();
      newCar._id = System.Guid.NewGuid().ToString();
      newCar.brand = "Honda";
      newCar.miles = 186000;
      newCar.age = 1;      

      vehicles.Insert(newCar);
    }
  }
}

The Insert method serializes the Cars object (newCar) and creates the MongoDB document, mapping the properties of the Cars class to the document's key-value pairs.

Let’s see how this got created in our database:

Run:

db.vehicles.find()

(Ignore the lname property; it was removed from the source code.)

Inserting Complex Objects in MongoDB

Let’s try inserting an object with a more complex data structure. Let’s assume we have the following class:

  public class Cars
  {
    public string _id { get; set; }
    public string brand { get; set; }
    public string lname { get; set; }
    public Int32 miles { get; set; }
    public int age { get; set; }

    public Hashtable history { get; set; }
    public Hashtable complex { get; set; }
    public int[] magicNumbers { get; set; }
  }

Now let's create an instance of the class and set its properties:

      Cars newCar = new Cars();
      newCar._id = System.Guid.NewGuid().ToString();
      newCar.brand = "Honda";
      newCar.miles = 186000;
      newCar.age = 1;
      
      newCar.history = new Hashtable();   
      newCar.history.Add("2010-08-14", "Puss 'N Boots");
      newCar.history.Add("2011-12-01", "Puff the Magic Dragon");
      newCar.history.Add("2009-03-07", "Fraggle Rock");      

      newCar.magicNumbers = new int[3];
      newCar.magicNumbers[0] = 60;
      newCar.magicNumbers[1] = 186000;
      newCar.magicNumbers[2] = 1440;

      newCar.complex = new Hashtable();
      newCar.complex.Add( newCar.magicNumbers[1], newCar.history );
      newCar.complex.Add( System.Guid.NewGuid().ToString(), newCar.history); 

It outputs the following (formatted for better reading):

{
  "_id": "a07513c8-9cb5-46cd-a9cb-425cabc0dd76",
  "brand": "Honda",
  "lname": null,
  "miles": 186000,
  "age": 1,
  "history": 
  {
    "2010-08-14": "Puss 'N Boots",
    "2009-03-07": "Fraggle Rock",
    "2011-12-01": "Puff the Magic Dragon"
  },
  "complex": 
  [
    ["eb7b3d4d-87b1-43c1-9fe7-5a5e4b5b79de",
    {
        "2010-08-14": "Puss 'N Boots",
        "2009-03-07": "Fraggle Rock",
        "2011-12-01": "Puff the Magic Dragon"
    }],
    [
    186000,
    {
        "2010-08-14": "Puss 'N Boots",
        "2009-03-07": "Fraggle Rock",
        "2011-12-01": "Puff the Magic Dragon"
    }]
  ],
  "magicNumbers": [60, 186000, 1440]
}

Now let’s look at the entire code for all this:

namespace ConsoleApplication1
{
  public class Cars
  {     
    public string _id { get; set; }   
    public string brand { get; set; }  
    public string lname { get; set; }  
    public Int32 miles { get; set; }  
    public int age { get; set; }
    
    public Hashtable history { get; set; }
    public Hashtable complex { get; set; }
    public int[] magicNumbers { get; set; }
  }

  class Program
  {   
    static void Main(string[] args)
    {
      
      MongoServer  server = MongoServer.Create("mongodb://work2");          
      MongoDatabase Cars = server.GetDatabase("Cars");

      MongoCollection<Cars> vehicles;
      
      vehicles = Cars.GetCollection<Cars>("vehicles");
      
      Cars newCar = new Cars();

      newCar._id = System.Guid.NewGuid().ToString();
      newCar.brand = "Honda";
      newCar.miles = 186000;
      newCar.age = 1;
      
      newCar.history = new Hashtable();   
      newCar.history.Add("2010-08-14", "Puss 'N Boots");
      newCar.history.Add("2011-12-01", "Puff the Magic Dragon");
      newCar.history.Add("2009-03-07", "Fraggle Rock");      

      newCar.magicNumbers = new int[3];
      newCar.magicNumbers[0] = 60;
      newCar.magicNumbers[1] = 186000;
      newCar.magicNumbers[2] = 1440;

      newCar.complex = new Hashtable();
      newCar.complex.Add( newCar.magicNumbers[1], newCar.history );
      newCar.complex.Add( System.Guid.NewGuid().ToString(), newCar.history); 
      
      vehicles.Insert(newCar);         
      
      Console.ReadLine();                
    }     
  }
}

Reading the 1st MongoDB document from the database

Before you query the database, you must make sure the class and member types match the values/objects that the MongoDB connection will return.

Let’s try to fetch the 1st document. To see what we’re expecting, let’s fetch it via the console first:

db.vehicles.find().limit(1)

which returns:

{
  "_id" : ObjectId("4d95f30058430000007a5943"),
  "ObjectId" : ObjectId("000000000000000000000000"),
  "brand" : "Ford",
  "lname" : null,
  "miles" : 2500,
  "age" : 1
}

Now that we know what it returns, let’s make sure we have the class in place that will handle the kind of key-value pairs in the MongoDB collection:

  public class Cars
  {
    public MongoDB.Bson.ObjectId _id { get; set; }
    public MongoDB.Bson.ObjectId ObjectId { get; set; }
    public string brand { get; set; }
    public string lname { get; set; }
    public Int32 miles { get; set; }
    public int age { get; set; }

    // Complex Properties
    public Hashtable history { get; set; }
    public Hashtable complex { get; set; }
    public int[] magicNumbers { get; set; }
  }

Notice that the properties history, complex, and magicNumbers are not part of the MongoDB document we queried. That's OK. The MongoDB C# driver is smart enough to map them to null when deserializing from MongoDB to a C# object.

Now how do you map values from a MongoDB connection to a C# object? Like so:

    Cars car = vehicles.FindOne();

The FindOne() method will return the first document. Now let’s add some context to this line of code, by adding MongoDB connection code (that we established before), writing it to the screen, and checking for nulls in the complex properties:

namespace ConsoleApplication1
{
  public class Cars
  {
    public MongoDB.Bson.ObjectId _id { get; set; }
    public MongoDB.Bson.ObjectId ObjectId { get; set; }
    public string brand { get; set; }
    public string lname { get; set; }
    public Int32 miles { get; set; }
    public int age { get; set; }

    // Complex Properties
    public Hashtable history { get; set; }
    public Hashtable complex { get; set; }
    public int[] magicNumbers { get; set; }
  }

  class Program
  {
    static void Main(string[] args)
    {
      MongoServer  server = MongoServer.Create("mongodb://work2");
      MongoDatabase Cars = server.GetDatabase("Cars");

      MongoCollection<Cars> vehicles;
      vehicles = Cars.GetCollection<Cars>("vehicles");

      Cars car = vehicles.FindOne();

      Console.WriteLine(car._id);
      Console.WriteLine(car.ObjectId);
      Console.WriteLine(car.brand);
      Console.WriteLine(car.lname);
      Console.WriteLine(car.miles);
      Console.WriteLine(car.age);

      if (car.history == null)
        Console.WriteLine("history was null");
      else
        Console.WriteLine(car.history);

      if(car.complex == null)
        Console.WriteLine("complex was null");
      else
        Console.WriteLine(car.complex);

      if (car.magicNumbers == null)
        Console.WriteLine("magicNumbers was null");
      else
        Console.WriteLine(car.magicNumbers);                  

      Console.WriteLine("ha made it this far!");
      Console.ReadLine();
    }
  }
}

This will print the following to the screen:

Read All Documents from a MongoDB Collection

Let's try to output all the documents from a collection. We've created a new MongoDB collection under the Cars database and, through the MongoDB client, added four documents. Let's read them from the console first:

db.bankaccount.find()

The four documents look like:

{ "_id" : ObjectId("4d9650301b6e000000004116"), "name" : "Batman" }
{ "_id" : ObjectId("4d9650391b6e000000004117"), "name" : "Spider-Man" }
{ "_id" : ObjectId("4d96503f1b6e000000004118"), "name" : "Superman" }
{ "_id" : ObjectId("4d9650431b6e000000004119"), "name" : "Spawn" }

So the first thing is to create a class that represents each MongoDB document:

  
  public class BankAccount
  {
    public MongoDB.Bson.ObjectId _id;
    public string name;
  }

The class is used to create an object that will contain the data that is deserialized (from MongoDB to a C# object).

Here’s the code to query all documents, iterating through them and showing them on the screen:

      MongoCursor<BankAccount> cursor = BankAccountCollection.FindAll();

      foreach (BankAccount ba in cursor)
      {
        Console.WriteLine(ba._id);
        Console.WriteLine(ba.name);
        Console.WriteLine("=============================");
      }

Looks straightforward enough. What is a cursor? We can think of a MongoDB Cursor as roughly analogous to a DataSet in .NET. Later, we'll see MongoDB Queries, which resemble a SqlCommand object in .NET. Queries are configurable objects used to search for document data. The Query object is then passed into one of the find methods (there are several) of the MongoCollection. The find method typically returns a Cursor, which is then iterated through. (More about this later.) Also note that a Cursor is not "populated" with documents until there's an attempt to retrieve the first result from the server.

Let’s see the entire program in action.

namespace ConsoleApplication1
{
  public class BankAccount
  {
    public MongoDB.Bson.ObjectId _id;
    public string name;
  }

  class Program
  {
    static void Main(string[] args)
    {
      MongoServer  server = MongoServer.Create("mongodb://work2");
      MongoDatabase BankAccountDatabase = server.GetDatabase("Cars");

      MongoCollection<BankAccount> BankAccountCollection;
      BankAccountCollection = BankAccountDatabase.GetCollection<BankAccount>("bankaccount");

      MongoCursor<BankAccount> cursor = BankAccountCollection.FindAll();

      foreach (BankAccount ba in cursor)
      {
        Console.WriteLine(ba._id);
        Console.WriteLine(ba.name);
        Console.WriteLine("=============================");
      }      

      Console.WriteLine("\n\nha made it this far!");
      Console.ReadLine();
    }
  }
}

The above program outputs the following:

If we wanted to retrieve a specific query, rather than doing this (from the above code):

     
      MongoCursor<BankAccount> cursor = BankAccountCollection.FindAll();

We would do:

      var query = Query.EQ("name", "Batman");
      var cursor = BankAccountCollection.Find(query);

One last thing to mention about the Find() method, as stated in the docs:
The Find method (and its variations) don’t immediately return the actual results of a query. Instead they return a cursor that can be enumerated to retrieve the results of the query. The query isn’t actually sent to the server until we attempt to retrieve the first result (technically, when MoveNext is called for the first time on the enumerator returned by GetEnumerator). This means that we can control the results of the query in interesting ways by modifying the cursor before fetching the results.
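For example, here's a minimal sketch (reusing the BankAccountCollection from above) of shaping the cursor before the query is actually sent:

      // Nothing has been sent to the server yet.
      MongoCursor<BankAccount> cursor = BankAccountCollection.FindAll();

      // Shape the result set before it executes: sort by name and take only the first two.
      cursor.SetSortOrder(SortBy.Ascending("name")).SetLimit(2);

      // The query runs when enumeration starts.
      foreach (BankAccount ba in cursor)
      {
        Console.WriteLine(ba.name);
      }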

Get only key-value pairs from a collection

In this example, we’re going to try to do something like this in SQL:

SELECT Column1, Column2
FROM  Table

We’re also going to be using the collection “bbu”. Let’s try to translate this query:

db.bbu.find({},{whichAPI:1}).limit(10)

Which would return the following:

{ "_id" : ObjectId("4d90fbd85843000000004359"), "whichAPI" : "SL" }
{ "_id" : ObjectId("4d90fbd8584300000000435a"), "whichAPI" : "SL" }
{ "_id" : ObjectId("4d90fbd8584300000000435b"), "whichAPI" : "SIT" }
{ "_id" : ObjectId("4d90fbd8584300000000435c"), "whichAPI" : "SL" }
{ "_id" : ObjectId("4d90fbd8584300000000435d"), "whichAPI" : "SL" }

First, we're going to define the class. We're going to establish all the properties that are needed for the class, but then only query and populate two columns.

Define the class and all of its properties.

public class BBU
  {
    public MongoDB.Bson.ObjectId _id;
    public int counter;
    public string whichAPI;
  }

Now the actual program:

  class Program
  {
    static void Main(string[] args)
    {    

      MongoServer  server = MongoServer.Create("mongodb://work2");
      MongoDatabase BBUDatabase = server.GetDatabase("bbu");

      MongoCollection<BBU> BBUCollection;
      BBUCollection = BBUDatabase.GetCollection<BBU>("bbu");

      MongoCursor<BBU> cursor = BBUCollection.FindAll().SetFields("_id", "whichAPI").SetLimit(5);

      foreach (BBU ba in cursor)
      {
        Console.WriteLine(ba._id + "----" + ba.whichAPI);
        Console.WriteLine("=============================");
      }      

      Console.WriteLine("\n\nha made it this far!");
      Console.ReadLine();                        

    }
  }

In the example above, SetLimit(5) will only return the top 5 documents.
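Tying this back to the SQL analogy, a query (the WHERE clause) can be combined with SetFields and SetLimit on the same cursor. Here's a minimal sketch reusing the BBUCollection from above (the "SL" filter value is just an example taken from the sample data):

      // Roughly: SELECT _id, whichAPI FROM bbu WHERE whichAPI = 'SL' LIMIT 5
      var query = Query.EQ("whichAPI", "SL");
      MongoCursor<BBU> cursor = BBUCollection.Find(query).SetFields("_id", "whichAPI").SetLimit(5);

      foreach (BBU row in cursor)
      {
        Console.WriteLine(row._id + "----" + row.whichAPI);
      }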

Conclusion

I hope you got a good sense of the basics of using the MongoDB C# driver. I've only scratched the surface of the API. I'll be discussing more advanced topics in future posts.

In the meantime, I point you to the MongoDB Google Group: http://groups.google.com/group/mongodb-user?pli=1 where you'll get informative responses, usually very quickly.

I also recommend following these folks on Twitter and these blogs/sites:

Lastly, I recommend you check out the MongoDB video presentations using the C# Driver and working in a Microsoft environment.

Companies Who Use ASP.NET

During a job hunt, I compiled a list of companies that use ASP.NET. Most of them, not surprisingly, are big corporate ones. Also, not all of them use ASP.NET exclusively for their entire site/business. Many of these sites have subsites for their many divisions, so a mix of technologies is used.

I’m going to keep updating as I find more.

  • Microsoft
  • Amazon

Encrypt In ColdFusion and Decrypt in PHP

A while ago, I had to integrate vBulletin into a CMS application. The CMS was done in ColdFusion and had to pass encrypted data over to vBulletin, which is PHP. It would be naive to think that one could just pick an encryption scheme (a standard like Blowfish, AES, SHA, etc.), look it up in the reference manual for both languages … and that would be it? Ha! While testing these, I realized that each language has its own options (or quirks, however you want to look at it). In addition, you have to consider Base64 encoding and cipher block modes.

Here's what I used in ColdFusion to do the encryption. I'm listing two methods from two different sources.

Method 1: Encryption uses blowfish with base64 encoding, using the block cipher mode ECB and Java objects.

<cffunction name="EncryptBlowfish1" returntype="string" hint="Encrypts string using Blowfish Algorithm"
  description="Encryption uses blowfish with base64 encoding, using the block cipher mode ECB.">

  <!--- This function serves as a wrapper for enhanced Blowfish encryption. Based on discussion --->
  <!--- from http://www.experts-exchange.com/Web_Development/Web_Languages-Standards/Cold_Fusion_Markup_Language/Q_24753297.html --->

  <cfargument name="Data" required="true" type="string" hint="Text to encrypt" />  
  <cfargument name="Key" required="true" type="string" hint="16-char key to be used for encryption." />
  
  <!--- get a cipher instance --->
  <cfset Cipher = createObject("java", "javax.crypto.Cipher")>
  <cfset encryptor = Cipher.getInstance("Blowfish/ECB/PKCS5Padding")>
  
  <!--- must convert the key string into a KeySpec object first --->
  <cfset keySpec = createObject("java", "javax.crypto.spec.SecretKeySpec").init(Arguments.Key.getBytes(), "Blowfish")>
  
  <!--- initialize the cipher for encrypting --->
  <cfset encryptor.init(Cipher.ENCRYPT_MODE, keySpec) />
  
  <!--- do the encrypt --->
  <cfset encryptedTextFromJava = encryptor.doFinal(Arguments.Data.getBytes()) />
  
  <!--- finally convert it to base64 and return --->  
  <cfreturn BinaryEncode(encryptedTextFromJava, "Base64") />

</cffunction>

Method 2: Much faster.

<!--- THIS IS MUCH FASTER THAN EncryptBlowfish1 --->
<cffunction name="EncryptBlowfish2" returntype="string" hint="Encrypts string using Blowfish Algorithm"
  description="Encryption uses blowfish with base64 encoding, using the block cipher mode ECB.">

  <!--- This function serves as a wrapper for enhanced Blowfish encryption. Based on discussion --->
  <!--- from http://www.petefreitag.com/item/222.cfm --->

  <cfargument name="Data" required="true" type="string" hint="Text to encrypt" />  
  <cfargument name="Key" required="true" type="string" hint="16-char key to be used for encryption." />

  <cfreturn Encrypt( Arguments.Data, ToBase64( Arguments.Key ), "BLOWFISH", "Base64" ) />  

</cffunction>

Now let’s test it!

<!--- YOU'LL NEED THIS TO DECRYPT! --->
<cfset myKey = "dan is too uber!" />

<cfoutput>
#EncryptBlowfish1( "This is so friggin cool that it works!", myKey )#
<hr />
#EncryptBlowfish2( "This is so friggin cool that it works!", myKey )#
</cfoutput>

This is how you would decrypt it in PHP:

$__key__ = "dan is too uber!";

function DecryptBlowfish( $data, $key )
{
  // Encryption uses blowfish with base64 encoding, using the block cipher mode ECB 
  // to decrypt the $data string.
  //    $data = the text to be decrypted.
  //    $key  = the key to be used, not base64 encoded.  
  
  return mcrypt_decrypt( MCRYPT_BLOWFISH, $key, base64_decode( $data ), MCRYPT_MODE_ECB );
}

Model Binding Checkboxes to a List

Model binding things other than textboxes can be tricky in ASP.NET MVC. To model bind correctly, one has to follow specific naming/value conventions in the HTML.

Let’s say we have a model object called People, and we have a List of states people have lived in.

  public class People
  {
    public List<string> StatesLiveIn { get; set; }
  }

And in our View we have the following:

@model TestBinder.Models.People

@foreach (string state in new List<string>() { "NY", "NJ", "FL", "IL", "TX" })
{     
  <input type="checkbox" name="StatesLiveIn" value="@state" /> @state <br />      
}

Our controller simply looks like this:

    public ActionResult Index()
    {
      return View();
    }

    [HttpPost]
    public ActionResult Index(People p)
    {
      return View(p);
    }

If we put a breakpoint at the return line and inspect after the user has submitted only NY and TX…

…we’ll see that it was correctly bound…

Asynchronous Upload to Amazon S3 (S3)

Last Updated: 11/29/2010

Author: Dan Romero

Table of Contents

  • Summary of Problem
  • Introducing Amazon S3
  • Benefits
  • Drawbacks
  • Creating a Basic HTML that POSTs a File to Amazon
  • Step 1 – Create a Bucket / Folder / Object
  • Step 2: Setting up the HTML Form
  • Setting up the Form tag
  • Step 3: Setting up the Other Form Fields
  • Optional Form Fields
  • Grabbing x-ignore- fields in ColdFusion
  • HTML Form Example
  • Using Amazon's HTML POST Form Generator
  • Assigning a unique ID to the object
  • Other Sources of Information

Requirements

Make sure you have an Amazon S3 Account (get it from tech).

JavaScript is mandatory for this to work (to be able to POST to two different domains) upon user submission.

Summary of Problem

This is a summary outlining the solution used in a video uploader. It entailed a form that would leverage the use of our Amazon S3 account. In addition, because the video files could be large (and to avoid CF limitations), a critical requirement was to upload from that page to S3 directly. At the same time, the form field information had to be saved to our database. This meant doing a double post – one to the XYZ domain, and the other to s3.amazonaws.com. This cross-domain POST could only be done via AJAX.

Here's a visualization:

As you can see, once the user clicks "Submit", there's an AJAX HTTP POST to the XYZ server to save the fields to the database, and then the JavaScript runs a form.submit() on the current form to submit the file via POST to S3.

Introducing Amazon S3

Amazon S3 (Simple Storage Service) is a cloud service whose sole purpose is to store files. To store files in it, one can use its REST API or the AWS admin console.

Screenshot of the AWS console, with Amazon S3 tab selected.

One gets to the AWS console with an account (you can get one from tech) by going to http://aws.amazon.com/s3/ and clicking "Sign in."

While S3 has many advantages, there is a set of drawbacks as well. To summarize, here's a list:

Benefits:

  • The max upload size for a file is 5 GB. Plenty.

  • For all the file sizes I've tested, uploads have gone super smoothly – like butter – so it's definitely reliable.

  • Amazon is super scalable (as you may already know), so parallel uploading from one or many users is really no problem.

  • It would not affect the performance of our servers – there could be many uploads, and they would go fast, without slowing down any other web sites on our servers.

  • The speed of the upload is limited only by the user's computer and internet provider – much faster than our servers.

  • Files can be made secure and unreadable, not just through obscurity – this is sometimes tricky to implement in ColdFusion.

Drawbacks:

To summarize, the reason for some of the drawbacks is that we're doing a POST request directly from one domain (ours) to another (s3.amazonaws.com). It's not being channeled through our CF servers.

There are two ways to interact with S3: the REST API, and doing a direct POST. With the REST API, the upload data has to be channeled through a server first before sending to Amazon – this was not what we were looking for, since our servers have issues with large files. So we looked into removing ourselves as the middleman and sending the data directly to S3 – via POST.

Here are the drawbacks, mainly three:

  • If S3 detects an error in the upload, e.g. if the file is too large, there's no default error page, just a redirect to an XML document hosted on s3.amazonaws.com. There's no way to set an error page – it's on Amazon's to-do list for a future release. One can't even customize the look and feel of the XML document you're redirected to. Side note: if the upload was successful, it gets redirected to a page you specify (at least there's some control here).

  • Progress bar reusable code is scarce. There's tons of code out there to do this; however, I could not find one that could do a cross-domain post. With traditional AJAX, you're only allowed to do a POST/GET if the URL you're using is on the same domain as the calling page. One could take the code for a progress bar plugin (as there are tons out there) and rewrite it to do a POST and work with Amazon S3 – but that would take a considerable amount of work.

  • Lack of documentation. There's not enough documentation for handling POST requests in the official Amazon Developer docs, which makes troubleshooting difficult. Doing POST submits is a relatively new feature of Amazon S3, compared to the rest of their APIs.

So the largest hurdle is coding functionality to get around the error page, since some JavaScript magic has to be put in place. That would be another day or so of work just for that, I believe. I already have some code in place that I put together while testing. If we left it as-is and an error occurred during the upload, the user would see something like this:

Which would, of course, be nonsensical. If the file was too large, they would see a message that the file was too large within the message tags. The user would then have to hit back to return to the form.

We can try another way, probably the easiest. When the user hits submit, we start showing an animated spinner while it's uploading. We can also tell the user that if they encounter an error page, they should just hit the back button. Also, keep in mind that there'll be validation in place before the upload to check the file extension, at the very least. The only edge case for seeing that XML error message is if the file they submitted is over the limit *AND* they have JavaScript turned off (which bypasses the JavaScript file-extension validation).

Creating a Basic HTML that POSTs a File to Amazon

Step 1 – Create a Bucket / Folder / Object:
The first thing we need to do is create a Bucket on S3. To do this the easy way, go to the AWS console at https://console.aws.amazon.com/s3/home and create a bucket:

Buckets are where you store your objects (i.e. your files of any format). You can create a folder for further organization:

As you can see, there are 4 folders here. We can double-click on the step1_new_submissions folder and see the objects contained within it:

You can right-click on one of those objects (files) and click “Properties”:

Notice that a Properties panel will expand below. To the right, you have three tabs: Details, Permissions, Metadata.

If you click on the Permissions tab, you'll notice that by default the selected file has the following permissions set by user USERNAME:

Go back to the details tab and click on the Link:

You’ll notice that you’ll be taken to an XML document in your browser that has the following:

That's because you have not given it public access. To give it public access, click on the Permissions tab again, click "Add more permissions", set the Grantee to "Everyone", choose Open/Download, and Save.

You can also make multiple objects public. Select an object, hold the SHIFT key, then select another object to select the objects in between. Then select "Make Public":

You can also upload an object manually via the “Upload” button:

Then click “Add more files”

As the file starts uploading, you’ll see the bottom panel show details about the file transfer:

Step 2: Setting up the HTML Form

The S3 REST API is very flexible, as long as you execute the proper method from your application, while at the same time sending the file over from your server to S3 (via a REST method with the correct URI). Traditionally, it would look like this:

Notice how there's a middle-man that captures the data submitted by the form along with the file. Then, it sends it along to S3. The middle-man here is crucial. Double the total bandwidth is spent here – the bandwidth to go from the user's machine to the web server (in this case CF), and then the bandwidth spent transferring the file to S3.

The advantage of this layout is that, because the web server acts as a middle-man, it can modify the data, change its filename, and slice-and-dice anything within the file, since the submitted file has to go through it first. Once the middle-man is done, it sends the file to S3. The drawback is the wasted resources on the middle-man, not to mention there may be limitations on the middle-man's ability to handle large files (> 1 GB).

As a solution, S3 offers a POST method where you can POST the file directly to S3:

Setting up the Form tag

Let's see how we can POST cross-domain (to a domain other than ours) to S3. Rather than doing the following (to post to the same domain):

<form action="#CGI.SCRIPT_NAME#" method="post" enctype="multipart/form-data">

We do the following:

<form action="http://s3.amazonaws.com/PastaVideos" method="post" enctype="multipart/form-data">

Where “PastaVideos” is the name of the bucket.

The format of the object URI is as follows:

Step 3: Setting up the Other Form Fields

This is where things get interesting. In order to set up an HTML form that can upload straight to S3, there’s a set of required input fields in the form. They are as follows:

All of the required fields are hidden inputs (<input type="hidden">). Their names and values are:

  • key – The location of the object you'll be uploading. You can consider it the concatenation of the folder(s) and the filename (don't use the bucket name):

    step1_new_submissions/4059C3_${filename}

    The ${filename} is the original filename of the file the user is uploading. The "4059C3_" is a made-up ID concatenated to ${filename} that adds uniqueness to the objects in the bucket, in case multiple people are uploading to it.

  • acl – The access control list. Can be set to:

    private – Lets the public user upload a file, but not access it once uploaded. To make it accessible, one has to go into the AWS console and change the rights.

    public-read – Lets the public user (or anyone else) see it after it has been uploaded.

  • AWSAccessKeyId – To get this key ID, go to your account, then Security Credentials, and grab the Access Key ID (this ID is also called the AWSAccessKeyId). You should also grab the "Secret Access Key" by clicking "Show" in the adjacent column. Keep the Secret Access Key private! Only the Access Key ID can be made public.

  • policy – A Base64-encoded JSON document that outlines the privileges and details of the files being uploaded. More details about this in the next section.

  • signature – The policy, HMAC-encrypted using the Secret Access Key.

  • content-type – The MIME content type of the file that will pass through the form.

  • success_action_redirect – The URL to go to when the upload succeeds. It could be any URL. When redirected, it will add three additional URL variables:

    bucket=PastaVideos
    key=step1_new_submissions%2A192DCAE1-D625-0944-9FBCCD5C6CCB161B_ajax5-loader.mov
    etag=%22356060aa56ce8955d38ed8c58661497a%22
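For reference, the policy and signature fields described above are just a Base64-encoded JSON document and an HMAC-SHA1 of that Base64 string, keyed with the Secret Access Key. The rest of this post uses ColdFusion, but the computation looks the same in any language; here's a minimal C# sketch (the policy JSON and the key are placeholder values, not real credentials):

  using System;
  using System.Security.Cryptography;
  using System.Text;

  class S3PolicySigner
  {
    static void Main()
    {
      // Placeholder policy document and Secret Access Key -- substitute your own.
      string policyJson = "{\"expiration\": \"2018-10-21T00:00:00Z\", \"conditions\": [ {\"bucket\": \"PastaVideos\"} ]}";
      string secretAccessKey = "YOUR-SECRET-ACCESS-KEY";

      // The "policy" form field: the JSON document, Base64 encoded.
      string policyB64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(policyJson));

      // The "signature" form field: HMAC-SHA1 of the Base64 policy, keyed with the
      // Secret Access Key, then Base64 encoded.
      using (HMACSHA1 hmac = new HMACSHA1(Encoding.UTF8.GetBytes(secretAccessKey)))
      {
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(policyB64)));
        Console.WriteLine(policyB64);
        Console.WriteLine(signature);
      }
    }
  }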

Optional Form Fields

IMPORTANT: If you add any other additional form fields, S3 will throw an error. If there is in fact a need for extra form fields (which will be passed along to another server), then you must prepend the prefix "x-ignore-" to their names. Let's say, for example, I have a few input fields I want S3 to ignore; then do as follows:

<input type="text" name="x-ignore-lastname" tabindex="2" class="textfield">

<input type="text" name="x-ignore-address1" tabindex="3" class="textfield">

<input type="text" name="x-ignore-address2" tabindex="4" class="textfield">

<input type="text" name="x-ignore-city" tabindex="5" class="textfield">

This is completely legal and will not throw errors.

Grabbing x-ignore- fields in ColdFusion

If you want to grab these form variables in ColdFusion, something like

Form.x-ignore-lastname

will not suffice, because of the dashes. You'll have to use the bracket/quotes format:

Form["x-ignore-lastname"]

to grab them.

Also, to check for existence or set a default value,

<cfparam name="form.x-ignore-lastname" default="parker" />

or

<cfparam name="form['x-ignore-lastname']" default="parker" />

will not work.

You'll have to use StructKeyExists( Form, "x-ignore-termsagree" ) to check for existence.

HTML Form Example

Putting all the variables from the previous table together, we get something like the following:

<input type="hidden" name="key" value="step1_new_submissions/9AAAAAAA-D633-0944-9FBCCCCC6CFB161B_${filename}" />

<input type="hidden" name="acl" value="private" />

<input type="hidden" name="AWSAccessKeyId" value="0N16468ABC47JDAQ2902" />

<input type="hidden" name="policy" value="eyJleHBpcmF0aW9uIjogIjIwMTgtMTAtMjFUMDA6MDA6MDBaIiwKICAiY29uZGl0aW9ucyI6IFsgCiAgICB7ImJ1Y2tldCI6lyZWN0IjogImh0dHA6Ly90ZXN0LXBhc3RhdmlkZW9zLm1pbGxlbm5pdW13ZWIuY29tL3RoYW5rcy5jZm0ifQogIF0KfQ==" />

<input type="hidden" name="signature" value="2AAAA/BhWMg4CCCCC32fzQ=" />

<input type="hidden" name="content-type" value="video/mov" />

<input type="hidden" name="success_action_redirect" value="http://XYZ.com/thanks.cfm" />

<!--- Ignore All This Stuff... --->
<input type="text" id="x-ignore-firstname" name="x-ignore-firstname" value="peter" />

<input type="text" name="x-ignore-lastname" value="parker" />

Using Amazon’s HTML POST Form Generator

Because setting up the above HTML for the form could be tricky, Amazon has a tool that easily generates the HTML for the above code.

The following is a screenshot of the tool. The form can be found at: http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html

So the first thing you do is:

1.       Fill out the IDs:

Where AWS ID = Access Key ID   and AWS Key = Secret Access Key

2.       The next thing we’ll do is fill in the POST URL:

3.       The third step is the trickiest step. This is a JSON document that must adhere to the JSON spec. By default, there’s already a default boilerplate JSON document there. Let’s analyze what it means:

Let’s use one a real one from the test-pastavideos.XYZweb.com page:

You’ll notice that it has content-length-range, which checks the max size, in this case being 1 GB, and it also redirects to the index.cfm page when successful.

After you copy and paste that JSON policy, press “Add Policy”. Notice how the fields in the section “Sample Form Based on the Above Fields” has been populated.

4. The fields may look something like this:

5. Now click on Generate HTML:

Note that with the generated HTML above, whatever the user uploads will be renamed to “testfile.txt” on S3. To retain the user’s filename, you have to switch the key’s value to value="${filename}".

6. You then copy the generated HTML and paste it into your page, adding any optional fields you need with the x-ignore- prefix.

That should give you a basic template for uploading to S3.

NOTE: You cannot change the values of any <input> fields except the key. More about this in the next section.

Assigning a unique ID to the object

Keep these items in mind when assigning a unique filename:

You cannot change the user’s filename in JavaScript. Once the user selects a file from his computer, you cannot append a GUID, because JavaScript will not let you set the value of a file input; you can only read it.

You cannot append the GUID prefix after the form is submitted, because the form posts straight to S3’s servers, and there’s no way to run conditional logic once it’s there.

To get around these limitations, you generate the GUID (a ColdFusion UUID in this case) on the server and prepend it to the filename before the form is rendered; a rough sketch of the idea follows, and then we’ll look at the actual pasta video form.
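
Just as an illustration (this is not the actual code from this form), the same server-side key construction could be sketched in Python. The “step1_new_submissions/” prefix matches the policy’s starts-with condition, and ${filename} is left as a literal because S3 expands it when the POST arrives. ColdFusion’s CreateUUID() produces a slightly different format than Python’s uuid4, but the idea of prepending a server-generated ID is the same.

import uuid

# Build the S3 object key on the server before rendering the form.
# The prefix must satisfy the policy's starts-with condition; ${filename}
# stays literal because S3 itself substitutes the uploaded file's name.
key = "step1_new_submissions/%s_${filename}" % uuid.uuid4()
print(key)  # e.g. step1_new_submissions/9aaaaaaa-d633-0944-9fbc-cccc6cfb161b_${filename}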

First let’s show the JSON policy document:

{"expiration": "2018-10-21T00:00:00Z",
  "conditions": [
    {"bucket": "PastaVideos"},
    ["starts-with", "$key", "step1_new_submissions/"],
    {"acl": "private"},
    ["starts-with", "$Content-Type", "video/mov"],
    ["content-length-range", 0, 10737774],
    {"success_action_redirect": "http://pastavideos.XYZ.com/index.cfm"}
  ]
}

And now the HTML / ColdFusion code:

<cfset Data.videoUUID  = CreateUUID() />

<form id="videoform" action="http://www.XYZ.com/index.cfm" method="get" enctype="multipart/form-data">

<!--- Start of Amazon S3 specific variables. --->

<input type="hidden" name="key" value="#Data.key##Data.videoUUID#_${filename}" />

<input type="hidden" name="acl" value="#Data.acl#" />

In the code above, when the HTML is rendered, the form already contains the filename S3 will use once the upload finishes. Remember, this is not the value of the <input type="file" /> box. So when the user uploads, it will look like this in the AWS Console:

Now why is the action set to http://www.XYZ.com/index.cfm and the method set to get? The HTML ships with those values, but when the page loads, JavaScript immediately runs and changes the action to http://s3.amazonaws.com/PastaVideos and the method to post:

<cfoutput>

<!--- Only change the form variables if JavaScript is turned on. --->

<script type="text/javascript">

  $( "##videoform" ).attr( "action", "#Variables.postURL#" );

  $( "##videoform" ).attr( "method", "post" );

</script>

</cfoutput>

This is so that if JavaScript is turned off, it doesn’t POST to S3.

Other Sources of Information

Amazon S3 POST Example in HTML

http://aws.amazon.com/code/Amazon%20S3/1093?_encoding=UTF8&jiveRedirect=1

AWS

http://aws.amazon.com/developertools/

Helpful Resources

http://wiki.smartfrog.org/wiki/display/sf/Amazon+S3

http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?AccessPolicyLanguage_UseCases_s3_a.html

http://docs.amazonwebservices.com/AmazonS3/latest/gsg/

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingHTTPPOST.html

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?HTTPPOSTExamples.html

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?Introduction.html#S3_ACLs

Test Form

http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html

Documentation Home Page

http://aws.amazon.com/documentation/s3/

Workflow using S3 policies

http://docs.amazonwebservices.com/AmazonS3/latest/dev/

Helpful Posts

http://aws.amazon.com/articles/1434?_encoding=UTF8&jiveRedirect=1

https://forums.aws.amazon.com/message.jspa?messageID=89017

https://forums.aws.amazon.com/message.jspa?messageID=88188

Read OPML File in ColdFusion

Whipped out this little script to read an OPML file from Google Reader. Thought it may be handy.

<cfset GoogleOPMLFile = "C:/google-reader-subscriptions.xml" />

<cffile action="READ" variable="xml" file="#GoogleOPMLFile#" /> 

<cfset xmlDoc = XMLParse(xml) /> 

<cfset StartingDataNode = 2 />

<cfset Categories = ArrayLen( xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren ) />

<cfoutput>

<cfloop index="i" from="2" to="#Categories#">

  <strong>#xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren[i].XmlAttributes.Title#</strong>
  <ul>
  <cfloop index="j" from="1" to="#ArrayLen( xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren[i].XmlChildren )#">           
    <li>
      <a href="#xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren[i].XmlChildren[j].XmlAttributes.htmlURL#">
      #xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren[i].XmlChildren[j].XmlAttributes.title#</a>
    </li>      
  </cfloop>
  </ul>

</cfloop>

</cfoutput>
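
As an aside, roughly the same traversal can be done in Python with the standard xml.etree.ElementTree module. This is just a sketch, assuming the usual opml/body/outline nesting (category outlines containing feed outlines) that the ColdFusion code relies on:

import xml.etree.ElementTree as ET

opml_file = "C:/google-reader-subscriptions.xml"  # same file as above

# <opml><body> holds the category <outline> elements, which in turn
# hold one <outline> per feed.
body = ET.parse(opml_file).getroot().find("body")

for category in body.findall("outline"):
    print(category.get("title"))
    for feed in category.findall("outline"):
        print("  %s -> %s" % (feed.get("title"), feed.get("htmlUrl")))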

The ColdFusion code above will display as follows:

High Performance Websites

This was a great read. I certainly learned a lot of useful lessons on front-end optimization. It’s written by Steve Souders (who previously led optimization efforts at Yahoo!), who also developed YSlow!, a Firefox add-on (actually an add-on to the Firefox add-on Firebug) that grades your web pages from A through F and tells you how to make them better.

If you have a build script, you may be able to automate a lot of these things, like combining JS and CSS files, and optimize PNG files (check out http://www.phpied.com/give-png-a-chance/ to see how to optimize from the command line). If you’re going to optimize JavaScript, I would recommend YUI Compressor (http://developer.yahoo.com/yui/compressor/) since it’s not as greedy as Google’s Closure Compiler for JavaScript. The Closure compiler (http://code.google.com/closure/compiler/) is relatively new and you may get even smaller files, but if your JavaScript is complex, it may have bugs because it’s a greedy compiler.

Anywhoot, here’s what I got from it:

  1. Reduce as many HTTP requests as possible.
  2. Minify JavaScript (don’t obfuscate, because it’s more trouble than it’s worth for the benefits you get)
  3. Minify CSS and optimize it (reduce duplication).
  4. Future-expire resources for caching (PNG, JPG, GIF, JavaScript and CSS).
  5. Minify HTML (get away from tables)
  6. Put CSS at the top, put JavaScript at the bottom.
  7. For design components (background images, button images, nav images), use CSS sprites.
  8. Use PNG for design components (they compress better than GIF, have partial transparency, and can have greater color palettes).
  9. Gzip non-binary data like HTML.
  10. Combine CSS and JavaScript into single files to reduce HTTP requests.

A summary of his optimization rules can be found here, though of course it’s not as detailed as the book: http://stevesouders.com/hpws/rules.php

Stoyan Stefanov, another prominent developer who’s written tons on JavaScript and optimization, published 24 articles this month on optimization. I find these articles invaluable: they’re recent, he runs actual optimization tests, and he tells you what tools he uses. Here’s the site: http://www.phpied.com/performance-advent-calendar-2009/

Show Time/Date/Weather in Perl

This little console program scrapes from http://www.timeanddate.com/ and prints to the screen. A handy little program I often use to tell the time in other cities.

sub getCityTime {  
  use LWP::Simple;
  my %hash;

  #new york http://www.timeanddate.com/worldclock/city.html?n=179
  #darwin   http://www.timeanddate.com/worldclock/city.html?n=72
  #tokyo    http://www.timeanddate.com/worldclock/city.html?n=248
  #peru     http://www.timeanddate.com/worldclock/city.html?n=131
  #         http://www.timeanddate.com/worldclock/results.html?query=darwin

  
  $url = "http://www.timeanddate.com/worldclock/city.html?n=";
  
  %hash = ( 'newyork', '179', 
            'darwin',   '72', 
            'tokyo',    '248', 
            'lima',     '131'
          );
  
  my $html = get($url.$hash{$_[0]})
    or die "Couldn't connect to the timeanddate.com website.";
    
  $html =~ m!<span>([\s\w,;']*)</span>!;
  print "Location ======&gt; ".$1."\n";  

  $html =~ m!<strong>([\s\w:,]*)</strong>!;
  print "Date/Time =====&gt; ".$1."\n";  

  $html =~ m!<strong>(\d{1,3})[\s\w:,;&amp;]*</strong>!;
  print "Temperature ===&gt; ".$1.chr(167)."F\n";  

}

# MAIN

if ($ARGV[0] ne "") {
  getCityTime( $ARGV[0] );
}
else {

  print "\nDisplays the time from TimeAndDate.com, based on city.\n\n";
  
  getCityTime( "newyork" );
  print "==============================\n";
  getCityTime( "lima" );
  print "==============================\n";
  getCityTime( "tokyo" );
  print "==============================\n";
  getCityTime( "darwin" );  

}

Java Pairs Well with Which Database?

In the same way there’s a tight bond between MySQL and PHP, SQL Server and ASP.NET, SQL Server and ColdFusion – what goes well with Java? Oracle? Being curious, I started searching in employment web sites. I searched for “Java” and one of these databases: Oracle, MySQL, SQL Server and PostgreSQL. (I put in “SQL Server” using quotes.) The sites used were: craigslist, Monster.com, Dice.com, and Yahoo! Hotjobs.

The numbers signify how many job entries were returned.

So it does seem Oracle goes with Java. Also I noticed how many people call “SQL Server” just “SQL.” Sort of confusing and hard to tell if they’re referring to the platform or language.

Convert Minutes to Hours

I often use both Winamp and my iPhone to listen to music. Unfortunately, the two show track times differently: Winamp displays the time in minutes (mm), while the iPhone uses hours and minutes (hh:mm). Here’s a quick little script I whipped together because I’m too lazy to do the conversion in my head, especially for audio books, which can run over 500 minutes, when I want to continue on the iPhone where I left off in Winamp.
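
For example, a Winamp time of 212:41 (212 minutes, 41 seconds) is 3:32:41 in iPhone time, since floor(212/60) = 3 hours and 212 mod 60 = 32 minutes.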

use POSIX qw(ceil floor); # used for the floor function

sub GetToken {
  # @_ = flatten args list from an array
  # @_[0] = first argument
  
  $data      = @_[0];
  $delimiter = @_[1];
  $token     = @_[2] - 1;
    
  @tokens_array = split($delimiter, $data);   
  
  return @tokens_array[$token]; 
}

sub chr_conver_min {  
  if (length(@_[0]) == 1) {
    return "0".@_[0];
  }
  else {
    return @_[0];
  }   
}


sub iphone_time_convert {

  # converts winamp time to iphone - winamp stores time only in minutes.  
  # @_[0]   =  winamp_time, e.g. 124:34
  # $hour   = floor($winamp_time/60);
  # $minute = $winamp_time % 60;

  $winamp_hour_min = GetToken(@_[0], ":", 1);  
  $winamp_seconds  = GetToken(@_[0], ":", 2);  
  
  return floor($winamp_hour_min/60).":".chr_conver_min( ($winamp_hour_min % 60) ).":".$winamp_seconds;
  
}


sub winamp_time_convert {  

  # converts iphone time to winamp  
  # @_[0] = iphone_time, e.g. 3:43:34    
  $iphone_hour     = GetToken(@_[0], ":", 1);  
  $iphone_min      = GetToken(@_[0], ":", 2);    
  $iphone_seconds  = GetToken(@_[0], ":", 3);
    
  return (($iphone_hour * 60) + $iphone_min).":".$iphone_seconds;
  
}

sub show_help {
  print "\nDisplays the conversion of time between winamp and iPhone.\n\n";
  print "   winamptime [-w2i|-i2p] [mm:ss][hh:mm:ss]\n\n";
  print "Example to convert winamp time to iPhone: \n\n";
  print "   winamptime -w2i 212:41\n\n";
  print "Example to convert iPhone time to winamp, seconds being optional: \n\n";
  print "   winamptime -i2w 2:31:41\n";
  print "   winamptime -i2w 2:31\n\n";
}


# START

# Optimize this:
if( $ARGV[0] eq "-w2i" ) 
{
  # winamp to iphone time
  if ( length($ARGV[1]) > 0 ) {
    print "iPhone Time: ".iphone_time_convert( $ARGV[1] )."\n";
  }
}
elsif( $ARGV[0] eq "-i2w" ) 
{
  # iphone to winamp time
  if ( length($ARGV[1]) > 0 ) {
    print "Winamp Time: ".winamp_time_convert( $ARGV[1] )."\n";
  }
}
else 
{
  show_help();
}

Output:

Compress and Move Log Files

Sometimes log files bog a system down. For one of our servers, I made this little Python script that compresses the log files in a directory (via WinRAR) and then moves them to a backup location. The only catch is that I wanted to leave the latest log files in that directory. Log files are created daily, so the latest log files have today’s datestamp. Here’s how I did it.

First Create the Python Script:

import os
import datetime

dateStamp  = datetime.datetime.now().strftime("%Y-%m-%d") 
imsLogPath = 'd:\\LogFiles\\'                     
# Don't use a mapped drive but use UNC for network drives. Task Schedule seems to choke when it calls Python.
newRARPath = '"\\\\192.168.1.2\\Root\\backups\\' + dateStamp + '.rar"'
rarPath    = '"C:\\Program Files\\WinRAR\\rar.exe" a -m5 ' + newRARPath 

# Get Latest Files
smtpLatest   = os.popen(r"dir /od /a-d /b " + imsLogPath + "SMTP*.log").read().splitlines()[-1]
postLatest   = os.popen(r"dir /od /a-d /b " + imsLogPath + "POST*.log").read().splitlines()[-1]
ischedLatest = os.popen(r"dir /od /a-d /b " + imsLogPath + "iSched*.log").read().splitlines()[-1]
relayLatest  = os.popen(r"dir /od /a-d /b " + imsLogPath + "Relay*.log").read().splitlines()[-1]
qengLatest   = os.popen(r"dir /od /a-d /b " + imsLogPath + "Qeng*.log").read().splitlines()[-1]

# Get List of All Files
allFiles     = os.popen(r"dir /od /a-d /b " + imsLogPath + "*.log").read().splitlines()

# Remove Latest Files from All Files List
allFiles.remove( smtpLatest )
allFiles.remove( postLatest )
allFiles.remove( ischedLatest )
allFiles.remove( relayLatest )
allFiles.remove( qengLatest )

# allFiles Array Has the list of files

# Flatten Array allFiles to be used as a parameter in system command
flatLogPathList = ""
for filenameWithPath in allFiles:
  flatLogPathList = flatLogPathList + imsLogPath + filenameWithPath + " "


# Execute WinRar
path = rarPath + " " + flatLogPathList.rstrip()
os.system( '"' + path + '"' )

# Delete all log files
os.system( '"del ' + flatLogPathList.rstrip() + '"' )

Then I set up the Scheduled Task:

With these Settings:

Get Latest File

In my last post, I made a quick script that picked out the latest files by date. It was very limiting, since it relied on the dir command. This one uses Python’s date/time modules and is more capable.

import os, os.path, stat, time
from datetime import date, timedelta, datetime

# Reference
# http://docs.python.org/library/datetime.html
# http://docs.python.org/library/time.html

def getFileDate( filenamePath ):    
  
  used = os.stat( filenamePath ).st_mtime      
  year, month, day, hour, minute, second = time.localtime(used)[:6]
  objDateTime = datetime(year, month, day, hour, minute, second)
  
  return objDateTime
  
  # Ways to reference this DateTime Object
  # objDateTime.strftime("%Y-%m-%d %I:%M %p")
  # objDateTime.year
  # objDateTime.month
  

def isDaysOldFromNow( filenamepath, days ):
  
  # Checks how old a file is. Is it older than "days" [variable] days?
  inTimeRange = False  
  timeDeltaDiff = ( datetime.now()-getFileDate( filenamepath ) ).days
  
  # Check if the file's date is days old or less:
  if ( timeDeltaDiff >= days ):
    inTimeRange = True  
    
  return inTimeRange

fname = "C:/temp/decision2.pdf"  

# Set this variable to check whether the file is this many days old
howOld = 3


if ( isDaysOldFromNow( fname, howOld ) ):
  print fname, "is more than", howOld, "days old"
else:
  print fname, "is NOT more than", howOld, "days old"

Output:

Convert Relative URLs to Absolute

I put this ColdFusion UDF together the other day to turn relative URLs into absolute ones. The code is pretty straightforward.

<cffunction name="URLRelativeToAbsolute" returntype="string"
  hint="Converts relative URLs in an element and converts to absolute. It includes the http:// protocol prefix.">  
  
  <cfargument name="content" type="string" required="true" hint="HTML content that will be scanned and replaced." />

  <cfargument name="domain" type="string" required="true" hint="Add domain name to relative links." />
  
  <cfset var local = StructNew() /> 
  
  <!--- The following regexp handles the following elements: link, a, img, script, form, frame. --->
  <cfset local.contentFixed = REReplaceNoCase( Arguments.content, "(href|src|action)=""/?((\./)|(\.\./)+|)(?!http)", "\1=""http://" & domain & "/", "all" ) />
  
  <!--- The following regexp handles the url() attribute of the background CSS property. --->
  <cfset local.contentFixed = REReplaceNoCase( local.contentFixed, "url\((\s)?(')?/?((\./)|(\.\./)+|)(?!http)", "url(\2http://" & domain & "/", "all" ) />

  <cfreturn local.contentFixed />    

</cffunction>

Usage:

<cfsavecontent variable="htmlContent">
<textarea name="data" rows="20" cols="60">
  <style>
    body { 
      background-image:url('stars.png');
      background-image:url('../stars.png');
      background-image:url('/stars.png');
      background-image:url('/../../../stars.png');
    }
  </style>
  <a href="../../../images/shiny.jpg">Shiny</a>
  <a href="http://www.google.com">This should not be touched</a>
  <img border="0" src="/images/cool.png" /> 
  <link rel="index" href="../../index.asp">
  <form method="POST" action="cgi/processing.cgi"></form>  
</textarea>
</cfsavecontent>

<cfoutput>

  #htmlContent#
  #URLRelativeToAbsolute( htmlContent, "www.shinylight.com" )#
  
</cfoutput>

Result: