Encrypt In ColdFusion and Decrypt in PHP

A while ago, I had to integrate vBulletin into a CMS application. The CMS was written in ColdFusion and had to pass encrypted data over to vBulletin, which is PHP. It would be naive to think that one could just pick a standard encryption scheme (Blowfish, AES, etc.), look it up in the reference manual for both languages, and be done. Ha! While testing, I realized that each language has its own options (or quirks, however you want to look at it). In addition, you have to consider Base64 encoding and cipher block modes.

Here’s what I used in ColdFusion to do the encryption. I’m listing two methods from two different sources.

Method 1: Encryption uses blowfish with base64 encoding, using the block cipher mode ECB and Java objects.

<cffunction name="EncryptBlowfish1" returntype="string" output="false" hint="Encrypts string using Blowfish Algorithm"
  description="Encryption uses Blowfish with Base64 encoding, using the block cipher mode ECB.">

  <!--- This function serves as a wrapper for enhanced Blowfish encryption. Based on discussion --->
  <!--- from http://www.experts-exchange.com/Web_Development/Web_Languages-Standards/Cold_Fusion_Markup_Language/Q_24753297.html --->

  <cfargument name="Data" required="true" type="string" hint="Text to encrypt" />
  <cfargument name="Key" required="true" type="string" hint="16-char key to be used for encryption." />

  <!--- var-scope the locals so the function is thread-safe --->
  <cfset var Cipher = createObject("java", "javax.crypto.Cipher") />
  <cfset var encryptor = Cipher.getInstance("Blowfish/ECB/PKCS5Padding") />
  <cfset var keySpec = createObject("java", "javax.crypto.spec.SecretKeySpec").init(Arguments.Key.getBytes(), "Blowfish") />
  <cfset var encryptedTextFromJava = "" />

  <!--- initialize the cipher for encrypting --->
  <cfset encryptor.init(Cipher.ENCRYPT_MODE, keySpec) />

  <!--- do the encrypt --->
  <cfset encryptedTextFromJava = encryptor.doFinal(Arguments.Data.getBytes()) />

  <!--- finally convert it to Base64 and return --->
  <cfreturn BinaryEncode(encryptedTextFromJava, "Base64") />

</cffunction>

Method 2: Much faster.

<!--- THIS IS MUCH FASTER THAN EncryptBlowfish1 --->
<cffunction name="EncryptBlowfish2" returntype="string" hint="Encrypts string using Blowfish Algorithm"
  description="Encryption uses blowfish with base64 encoding, using the block cipher mode ECB.">

  <!--- This function serves as a wrapper for enhanced Blowfish encryption. Based on discussion --->
  <!--- from http://www.petefreitag.com/item/222.cfm --->

  <cfargument name="Data" required="true" type="string" hint="Text to encrypt" />  
  <cfargument name="Key" required="true" type="string" hint="16-char key to be used for encryption." />

  <cfreturn Encrypt( Arguments.Data, ToBase64( Arguments.Key ), "BLOWFISH", "Base64" ) />  

</cffunction>

Now let’s test it!

<!--- YOU'LL NEED THIS TO DECRYPT! --->
<cfset myKey = "dan is too uber!" />

<cfoutput>
#EncryptBlowfish1( "This is so friggin cool that it works!", myKey )#
<hr />
#EncryptBlowfish2( "This is so friggin cool that it works!", myKey )#
</cfoutput>

This is how you would decrypt it in PHP:

$__key__ = "dan is too uber!";

function DecryptBlowfish( $data, $key )
{
  // Encryption uses Blowfish with Base64 encoding, using the block cipher mode ECB
  // to decrypt the $data string.
  //    $data = the text to be decrypted.
  //    $key  = the key to be used, not Base64 encoded.

  $out = mcrypt_decrypt( MCRYPT_BLOWFISH, $key, base64_decode( $data ), MCRYPT_MODE_ECB );

  // ColdFusion pads with PKCS5, and mcrypt does not strip the padding for you.
  // The last byte of the decrypted output tells you how many padding bytes to remove.
  $pad = ord( substr( $out, -1 ) );
  return substr( $out, 0, -$pad );
}
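One of those block-cipher quirks is padding: the Java cipher on the ColdFusion side pads the plaintext up to Blowfish's 8-byte block size with PKCS#5, while PHP's mcrypt hands those padding bytes back to you after decrypting. Here's a language-neutral sketch of just the padding and Base64 steps in Python (no actual Blowfish; purely to illustrate the rule both sides must agree on):

```python
import base64

BLOCK = 8  # Blowfish block size in bytes

def pkcs5_pad(data: bytes) -> bytes:
    """Append PKCS#5 padding: N bytes, each with value N (1..BLOCK)."""
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

def pkcs5_unpad(data: bytes) -> bytes:
    """Strip PKCS#5 padding by reading the final byte."""
    return data[:-data[-1]]

plaintext = b"This is so friggin cool that it works!"
padded = pkcs5_pad(plaintext)
assert len(padded) % BLOCK == 0
assert pkcs5_unpad(padded) == plaintext

# The ciphertext is then Base64-encoded for transport between the two apps:
encoded = base64.b64encode(padded).decode()
assert base64.b64decode(encoded) == padded
```

If the PHP side forgets to strip the padding, the decrypted string will come back with one to eight trailing bytes that look like garbage.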

Model Binding Checkboxes to a List

Model binding things other than textboxes can be tricky in ASP.NET MVC. To model bind correctly, one has to follow specific naming and value conventions in the HTML.

Let’s say we have a model object called People, and we have a List of states people have lived in.

  public class People
  {
    public List<string> StatesLiveIn { get; set; }
  }

And in our View we have as follows:

@model TestBinder.Models.People

@using (Html.BeginForm())
{
  foreach (string state in new List<string>() { "NY", "NJ", "FL", "IL", "TX" })
  {
    @* each checkbox shares the property name, so checked values bind to the list *@
    <input type="checkbox" name="StatesLiveIn" value="@state" /> @state <br />
  }
  <input type="submit" value="Submit" />
}

Our controller simply looks like this:

    public ActionResult Index()
    {
      return View();
    }

    [HttpPost]
    public ActionResult Index(People p)
    {
      return View(p);
    }

If we put a trace point at the return line, and inspect after the user has submitted only NY and TX…

…we’ll see that it was correctly bound…
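The convention that makes this work is that every checkbox shares the property name, so the POST body carries a repeated `StatesLiveIn` key for each checked box, and the binder collects those values into the list. As a rough illustration of the same idea outside ASP.NET, here's how a repeated-key form body parses in Python:

```python
from urllib.parse import parse_qs

# A form body like the one our checkboxes produce when only NY and TX are checked.
# Repeating the same field name is what lets the server bind it to a list.
body = "StatesLiveIn=NY&StatesLiveIn=TX"

fields = parse_qs(body)
assert fields["StatesLiveIn"] == ["NY", "TX"]
```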


Asynchronous Upload to Amazon S3

Last Updated: 11/29/2010

Author: Dan Romero

Table of Contents

Summary of Problem
Introducing Amazon S3
Benefits
Drawbacks
Creating a Basic HTML that POSTs a File to Amazon
Step 1 – Create a Bucket / Folder / Object
Step 2: Setting up the HTML Form
Setting up the Form tag
Step 3: Setting up the Other Form Fields
Optional Form Fields
Grabbing x-ignore- fields in ColdFusion
HTML Form Example
Using Amazon’s HTML POST Form Generator
Assigning a unique ID to the object
Other Sources of Information

Requirements

Make sure you have an Amazon S3 account (get it from tech).

JavaScript is mandatory for this to work, since the browser has to POST to two different domains upon user submission.

Summary of Problem

This is a summary outlining the solution used in a video uploader. It entailed a form that would leverage our Amazon S3 account. Because the video files could be large (and to avoid ColdFusion upload limitations), a critical requirement was to upload to S3 directly. At the same time, the form field information had to be saved to our database. This meant doing a double post – one to the XYZ domain, and the other to s3.amazonaws.com. This cross-domain POST could only be done via AJAX.

Here’s a visualization:

As you can see, once the user clicks “Submit”, there’s an AJAX HTTP POST to the XYZ server to save the fields to the database, and then the JavaScript runs a form.submit() on the current form to submit the file via POST to S3.

Introducing Amazon S3

Amazon S3 (Simple Storage Service) is a cloud service whose sole purpose is to store files. To store files in it, one can use its REST API or the AWS Management Console.

Screenshot of the AWS console, with Amazon S3 tab selected.

One gets to the AWS console with an account (you can get one from tech) by going to http://aws.amazon.com/s3/ and clicking “Sign in.”

While S3 has many advantages, there are a set of drawbacks as well. To summarize, here’s a list:

Benefits:

  • The max upload size for a file is 5 GB. Plenty.
  • For all the times I’ve tested uploads of different file sizes, everything has gone super smoothly – like butter – so it’s definitely reliable.
  • Amazon is super scalable (as you may already know), so parallel uploading from one or many users is really no problem.
  • It would not affect the performance of our servers – there could be many simultaneous uploads, and they would go fast without slowing down any other web sites on our servers.
  • The speed of the upload is limited only by the user’s computer and internet provider – much faster than going through our servers.
  • Files can be made secure and unreadable, not just hidden through obscurity – something that is sometimes tricky to implement in ColdFusion.

Drawbacks:

To summarize, the reason for most of the drawbacks is that the POST request goes directly from one domain (ours) to another (s3.amazonaws.com); it’s not channeled through our ColdFusion servers.

There are two ways to interact with S3: the REST API, and a direct POST. With the REST API, the upload has to be channeled through a server before being sent to Amazon – not what we were looking for, since our servers have issues with large files. So we looked into removing ourselves as the middleman and sending the data directly to S3 via POST.

There are three main drawbacks:

  • If S3 detects an error in the upload, e.g. the file is too large, there’s no default error page – just a redirect to an XML document hosted on s3.amazonaws.com. There’s no way to set an error page; it’s on Amazon’s to-do list for a future release. One can’t even customize the look and feel of the XML document you’re redirected to. Side note: if the upload is successful, it redirects to a page you specify (at least there’s some control here).
  • Reusable progress bar code is scarce. There’s tons of code out there to do this; however, I could not find any that could do a cross-domain post. With traditional AJAX, you’re only allowed to do a POST/GET if the URL you’re using is on the same domain as the calling page. One could take the code for a progress bar plugin (as there are tons out there) and rewrite it to POST to Amazon S3 – but that would take a considerable amount of work.
  • Lack of documentation. There’s not enough documentation for handling POST requests in the official Amazon developer docs, which makes troubleshooting difficult. POST submits are a relatively new feature of Amazon S3 compared to the rest of their APIs.

So the largest hurdle is coding around the error page, since some JavaScript magic has to be put in place. That would be another day or so of work just for that, I believe. I already have some code in place that I put together while testing. If we left it as-is and an upload failed, the user would see something like this:

Which would, of course, be nonsensical to the user. If the file was too large, they would see a message saying so within the XML message tags. The user would then have to hit Back to return to the form.

We can try another way, probably the easiest: when the user hits Submit, show the animated spinner while the file is uploading. We can also tell the user that if they encounter an error page, they should just hit the Back button. Keep in mind that there will be validation in place before the upload to check the file extension, at the very least. The only way to end up on that XML error message is if the submitted file is over the size limit *and* JavaScript is turned off (which bypasses the JavaScript file extension validation).

Creating a Basic HTML that POSTs a File to Amazon

Step 1 – Create a Bucket / Folder / Object:
The first thing we need to do is create a bucket on S3. To do this the easy way, go to the AWS console at https://console.aws.amazon.com/s3/home and create a bucket:

Buckets are where you store your objects (i.e. your files of any format). You can create a folder for further organization:

As you can see, there are 4 folders here. We can double-click the step1_new_submissions folder and see the objects contained within it:

You can right-click on one of those objects (files) and click “Properties”:

Notice that a Properties panel will expand below. To the right, you have three tabs: Details, Permissions, Metadata.

If you click on the Permissions tab, you’ll notice that by default the selected file has the following permissions, set by user USERNAME:

Go back to the details tab and click on the Link:

You’ll notice that you’ll be taken to an XML document in your browser that has the following:

It’s because you have not given it public access. To give it public access, click on the Permissions tab again, click “Add more permissions”, set the Grantee to “Everyone”, check Open/Download, and Save.

You can also make multiple objects public at once. Select an object, hold the SHIFT key, then select another object to select everything in between. Then right-click and select “Make Public”:

You can also upload an object manually via the “Upload” button:

Then click “Add more files”

As the file starts uploading, you’ll see the bottom panel show details about the file transfer:

Step 2: Setting up the HTML Form

The S3 REST API is very flexible: you execute the proper method from your application while sending the file over from your server to S3 (via a REST call with the correct URI). Traditionally, it would look like this:

Notice how there’s a middleman that captures the data submitted by the form along with the file, and then sends it along to S3. The middleman is crucial here, and double the total bandwidth is spent – the bandwidth to go from the user’s machine to the web server (in this case ColdFusion), and then the bandwidth spent transferring the file to S3.

The advantage of this layout is that because the web server acts as a middleman, it can modify the data, change the filename, and slice-and-dice anything within the file, since the submitted file has to go through it first. Once the middleman is done, it sends the file to S3. The drawback is the wasted resources on the middleman, not to mention that the middleman may have limitations handling large files (> 1 GB).

As a solution, S3 offers a POST interface where you can send the file directly to S3:

Setting up the Form tag

Let’s see how we can post cross-domain (to a domain other than ours) to S3. Rather than doing the following (posting to the same domain):

<form action="#CGI.SCRIPT_NAME#" method="post" enctype="multipart/form-data">

We do the following:

<form action="http://s3.amazonaws.com/PastaVideos" method="post" enctype="multipart/form-data">

Where “PastaVideos” is the name of the bucket.

The format of the object URI is http://s3.amazonaws.com/<bucket>/<key>.

Step 3: Setting up the Other Form Fields

This is where things get interesting. In order to set up an HTML form that can upload straight to S3, there’s a set of required input fields in the form. They are as follows:

The required fields (all hidden <input> tags):

  • key – The location of the object you’ll be uploading. You can think of it as the concatenation of the folder(s) and the filename (don’t use the bucket name):

    step1_new_submissions/4059C3_$(unknown)

    The $(unknown) is the original filename of the file the user is uploading. The “4059C3_” is a made-up ID concatenated onto it that adds uniqueness to the objects in the bucket when multiple people are uploading.

  • acl – The access control list. Can be set to:

    private – Lets the public user upload a file, but not access it once uploaded. To make it accessible, one has to go into the AWS console and change the rights.

    public-read – Lets the uploader, or anyone else, see the file after it has been uploaded.

  • AWSAccessKeyId – To get this key ID, go to your Amazon account, then Security Credentials, and copy the Access Key ID (this ID is also called the AWSAccessKeyId). You should also grab the “Secret Access Key” by clicking “Show” in the adjacent column. Keep the Secret Access Key private! Only the Access Key ID can be made public.

  • policy – A Base64-encoded JSON document that outlines the privileges and details of the files being uploaded. More details about this in the next section.

  • signature – The HMAC-SHA1 signature of the policy, computed using the Secret Access Key.

  • content-type – The MIME type of the file that will pass through the form.

  • success_action_redirect – The URL to go to when the upload succeeds. It can be any URL. When redirecting, S3 adds three additional URL variables:

    bucket=PastaVideos

    key=step1_new_submissions%2A192DCAE1-D625-0944-9FBCCD5C6CCB161B_ajax5-loader.mov

    etag=%22356060aa56ce8955d38ed8c58661497a%22
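The policy and signature fields are mechanical to produce. As a sketch (Python standard library only, with a dummy secret key and a trimmed-down policy; your real key comes from the Security Credentials page), the legacy S3 POST signature is the HMAC-SHA1 of the Base64-encoded policy:

```python
import base64
import hashlib
import hmac
import json

# Dummy secret for illustration only -- substitute your real Secret Access Key.
secret_access_key = b"uV3F3YluFJax1cknvbcGwgjvx4QpvB+leU8dUj2o"

policy_document = {
    "expiration": "2018-10-21T00:00:00Z",
    "conditions": [
        {"bucket": "PastaVideos"},
        ["starts-with", "$key", "step1_new_submissions/"],
    ],
}

# Step 1: Base64-encode the JSON policy; this becomes the "policy" form field.
policy = base64.b64encode(json.dumps(policy_document).encode("utf-8"))

# Step 2: HMAC-SHA1 the encoded policy with the secret key, then Base64-encode
# the digest; this becomes the "signature" form field.
signature = base64.b64encode(
    hmac.new(secret_access_key, policy, hashlib.sha1).digest()
)
```

Note that the signature covers the Base64 string, not the raw JSON, so any whitespace change to the policy after signing will invalidate it.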

Optional Form Fields

IMPORTANT: If you add any other additional form fields, S3 will throw an error. If there is in fact a need for extra form fields, which will be passed along to another server, then you must prepend the prefix “x-ignore-”. Let’s say I have a few input fields I want S3 to ignore; then do as follows:

<input type="text" name="x-ignore-lastname" tabindex="2" class="textfield">

<input type="text" name="x-ignore-address1" tabindex="3" class="textfield">

<input type="text" name="x-ignore-address2" tabindex="4" class="textfield">

<input type="text" name="x-ignore-city" tabindex="5" class="textfield">

This is completely legal and will not throw errors.

Grabbing x-ignore- fields in ColdFusion

If you want to grab these form variables in ColdFusion, something like

Form.x-ignore-lastname

will not suffice, because of the dashes. You’ll have to use the bracket/quotes format:

Form["x-ignore-lastname"]

to grab them.

Also, to check for existence or set a default value,

<cfparam name="form.x-ignore-lastname" default="parker" />

or

<cfparam name="form['x-ignore-lastname']" default="parker" />

will not work.

You’ll have to use StructKeyExists( Form, "x-ignore-termsagree" ) to check for existence.

HTML Form Example

Putting all the variables together from the previous table, we get something like the following:

<input type="hidden" name="key" value="step1_new_submissions/9AAAAAAA-D633-0944-9FBCCCCC6CFB161B_$(unknown)" />

<input type="hidden" name="acl" value="private" />

<input type="hidden" name="AWSAccessKeyId" value="0N16468ABC47JDAQ2902" />

<input type="hidden" name="policy" value="eyJleHBpcmF0aW9uIjogIjIwMTgtMTAtMjFUMDA6MDA6MDBaIiwKICAiY29uZGl0aW9ucyI6IFsgCiAgICB7ImJ1Y2tldCI6lyZWN0IjogImh0dHA6Ly90ZXN0LXBhc3RhdmlkZW9zLm1pbGxlbm5pdW13ZWIuY29tL3RoYW5rcy5jZm0ifQogIF0KfQ==" />

<input type="hidden" name="signature" value="2AAAA/BhWMg4CCCCC32fzQ=" />

<input type="hidden" name="content-type" value="video/mov" />

<input type="hidden" name="success_action_redirect" value="http://XYZ.com/thanks.cfm" />

<!-- Ignore All This Stuff... -->
<input type="text" id="x-ignore-firstname" name="x-ignore-firstname" value="peter" />

<input type="text" name="x-ignore-lastname" value="parker" />
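For reference, when the browser submits this form it serializes the fields into a multipart/form-data body. Here is a rough sketch (plain Python, hypothetical field values) of what goes over the wire; S3 only honors fields that appear before the file part, so the file input should come last in the form:

```python
import io
import uuid

def multipart_body(fields: dict, filename: str, file_bytes: bytes):
    """Assemble a multipart/form-data body the way a browser would.
    The file part is written last, since S3 ignores fields after it."""
    boundary = uuid.uuid4().hex
    out = io.BytesIO()
    for name, value in fields.items():
        part = (
            f'--{boundary}\r\n'
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f'{value}\r\n'
        )
        out.write(part.encode("utf-8"))
    head = (
        f'--{boundary}\r\n'
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n\r\n'
    )
    out.write(head.encode("utf-8"))
    out.write(file_bytes)
    out.write(f"\r\n--{boundary}--\r\n".encode("utf-8"))
    return boundary, out.getvalue()

# Hypothetical values mirroring the form above.
boundary, body = multipart_body(
    {"key": "step1_new_submissions/9AAAAAAA_$(unknown)", "acl": "private"},
    "ajax5-loader.mov",
    b"\x00\x01\x02",
)
```

This is only to show the shape of the request; in practice the browser builds this body for you.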

Using Amazon’s HTML POST Form Generator

Because setting up the above HTML by hand can be tricky, Amazon has a tool that generates it for you.

The following is a screenshot of the tool. The form can be found at: http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html

So the first thing you do is:

1. Fill out the IDs:

Where AWS ID = Access Key ID and AWS Key = Secret Access Key.

2. Next, fill in the POST URL:

3. The third step is the trickiest. The policy is a JSON document that must adhere to the JSON spec. By default, there’s already boilerplate JSON in the box. Let’s analyze what it means:

Let’s use a real one, from the test-pastavideos.XYZweb.com page:

You’ll notice that it has content-length-range, which caps the file size – in this case at 1 GB – and that it redirects to the index.cfm page on success.

After you copy and paste that JSON policy, press “Add Policy”. Notice how the fields in the “Sample Form Based on the Above Fields” section have been populated.

4. The fields may look something like this:

5. Now click on Generate HTML:

Note that with the HTML code above, whatever the user uploads will be renamed to “testfile.txt” on S3. To retain the user’s filename, you have to switch it to value="$(unknown)"

6. You then copy that generated HTML and paste it into your page. Add any necessary optional fields with the x-ignore-prefixes.

That should give you a basic template for uploading to S3.
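For context on the policy and signature fields in the form: the policy is just the Base64-encoded JSON policy document, and the signature is a Base64-encoded HMAC-SHA1 of that Base64 string computed with your Secret Access Key. A minimal Python sketch of what the generator is doing (the secret key here is a made-up placeholder):

```python
import base64
import hashlib
import hmac
import json

def s3_post_fields(policy_doc: dict, secret_key: str) -> dict:
    """Encode a POST policy and sign it the way S3 expects:
    policy = Base64(JSON), signature = Base64(HMAC-SHA1(secret, policy))."""
    policy = base64.b64encode(json.dumps(policy_doc).encode("utf-8")).decode("ascii")
    digest = hmac.new(secret_key.encode("utf-8"),
                      policy.encode("ascii"), hashlib.sha1).digest()
    return {"policy": policy, "signature": base64.b64encode(digest).decode("ascii")}

# "not-a-real-secret" is a placeholder, not a working credential.
fields = s3_post_fields(
    {"expiration": "2018-10-21T00:00:00Z", "conditions": [{"acl": "private"}]},
    "not-a-real-secret",
)
print(fields["policy"])
print(fields["signature"])
```

Whatever produces these two values, they must come from the same policy document the form enforces, or S3 rejects the POST.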

NOTE: You cannot change the values of any <input> fields except the key. More about this in the next section.

Assigning a unique ID to the object

Keep these items in mind when assigning a unique filename:

You cannot change the user’s filename in JavaScript – once the user selects a file from his computer, you cannot append a GUID, because JavaScript will not let you set the value of a file input; you can only read it.

You cannot append the GUID prefix after the form is submitted, because the file goes straight to S3’s servers, and there’s no way to run conditional logic once it’s on S3.

To get around these limitations, you generate the GUID or rather a ColdFusion UUID, then append it to your filename. Let’s take a look at an example of the pasta video form:

First let’s show the JSON policy document:

{"expiration": "2018-10-21T00:00:00Z",

"conditions": [

{"bucket": "PastaVideos"},

["starts-with", "$key", "step1_new_submissions/"],

{"acl": "private"},

["starts-with", "$Content-Type", "video/mov"],

["content-length-range", 0, 10737774],

{"success_action_redirect": "http://pastavideos.XYZ.com/index.cfm"}

]

}

And now the HTML / ColdFusion code:

<cfset Data.videoUUID  = CreateUUID() />

<form id="videoform" action="http://www.XYZ.com/index.cfm" method="get" enctype="multipart/form-data">

<!--- Start of Amazon S3 specific variables. --->

<input type="hidden" name="key"

value="#Data.key##Data.videoUUID#_$(unknown)" />

<input type="hidden" name="acl" value="#Data.acl#" />

In the code above, when the HTML is rendered, it will already have the filename S3 will use when the upload finishes. Remember, this is not the value of the <input type="file" /> box. So when the user uploads, it will look like this on the AWS Console:
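The key-building step is language-agnostic. Here is the same idea as a Python sketch; note that ColdFusion’s CreateUUID() format differs slightly from a standard UUID (it drops one hyphen), but any sufficiently unique string works, and the ${filename} placeholder is S3’s documented substitution variable:

```python
import uuid

def make_s3_key(prefix: str) -> str:
    """Build a unique S3 key: a server-generated UUID between the folder
    prefix and S3's ${filename} placeholder, so two users uploading files
    with the same name never collide."""
    return prefix + str(uuid.uuid4()) + "_${filename}"

print(make_s3_key("step1_new_submissions/"))
```

The prefix and placeholder stay fixed; only the UUID in the middle changes per render.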

Now why is the action set to http://www.XYZ.com/index.cfm and the method set to get? The HTML has these defaults, but when the page loads, JavaScript immediately runs and changes the action to http://s3.amazonaws.com/PastaVideos and the method to post:

<cfoutput>

<!--- Only change the form variables if JavaScript is turned on. --->

$( "##videoform" ).attr( "action", "#Variables.postURL#" );

$( "##videoform" ).attr( "method", "post" );

</cfoutput>

This is so that if JavaScript is turned off, it doesn’t POST to S3.

Other Sources of Information

Amazon S3 POST Example in HTML

http://aws.amazon.com/code/Amazon%20S3/1093?_encoding=UTF8&jiveRedirect=1

AWS

http://aws.amazon.com/developertools/

Helpful Resources

http://wiki.smartfrog.org/wiki/display/sf/Amazon+S3

http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?AccessPolicyLanguage_UseCases_s3_a.html

http://docs.amazonwebservices.com/AmazonS3/latest/gsg/

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingHTTPPOST.html

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?HTTPPOSTExamples.html

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?Introduction.html#S3_ACLs

Test Form

http://s3.amazonaws.com/doc/s3-example-code/post/post_sample.html

Documentation Home Page

http://aws.amazon.com/documentation/s3/

Workflow using S3 policies

http://docs.amazonwebservices.com/AmazonS3/latest/dev/

Helpful Posts

http://aws.amazon.com/articles/1434?_encoding=UTF8&jiveRedirect=1

https://forums.aws.amazon.com/message.jspa?messageID=89017

https://forums.aws.amazon.com/message.jspa?messageID=88188

Read OPML File in ColdFusion

Whipped out this little script to read an OPML file from Google Reader. Thought it may be handy.

<cfset GoogleOPMLFile = "C:/google-reader-subscriptions.xml" />

<cffile action="READ" variable="xml" file="#GoogleOPMLFile#" /> 

<cfset xmlDoc = XMLParse(xml) /> 

<cfset StartingDataNode = 2 />

<cfset Categories = ArrayLen( xmlDoc.opml.xmlChildren[2].XmlChildren ) />

<cfoutput>

<cfloop index="i" from="2" to="#Categories#">

  <strong>#xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren[i].XmlAttributes.Title#</strong>
  <ul>
  <cfloop index="j" from="1" to="#ArrayLen( xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren[i].XmlChildren )#">           
    <li>
      <a href="#xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren[i].XmlChildren[j].XmlAttributes.htmlURL#">
      #xmlDoc.opml.xmlChildren[StartingDataNode].XmlChildren[i].XmlChildren[j].XmlAttributes.title#</a>
    </li>      
  </cfloop>
  </ul>

</cfloop>

</cfoutput>

The code will display as follows:
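For comparison, the same traversal reads naturally in Python with ElementTree. The inline OPML below is a trimmed stand-in for a Google Reader export (Google spells the attribute htmlUrl; ColdFusion’s case-insensitive struct access matches it either way):

```python
import xml.etree.ElementTree as ET

# Trimmed sample in the shape Google Reader exports:
# category <outline> nodes containing feed <outline> nodes.
opml = """<opml version="1.0">
  <head><title>Subscriptions</title></head>
  <body>
    <outline title="Tech" text="Tech">
      <outline title="Example Feed" text="Example Feed"
               htmlUrl="http://example.com" xmlUrl="http://example.com/rss"/>
    </outline>
  </body>
</opml>"""

root = ET.fromstring(opml)
for category in root.find("body"):
    print(category.get("title"))
    for feed in category:
        print(" -", feed.get("title"), feed.get("htmlUrl"))
```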

High Performance Websites

This was a great read. Certainly learned a lot of useful lessons on front-end optimizations. Written by Steve Souders (who previously led optimization efforts at Yahoo!), who also developed YSlow – a Firefox add-on (actually an add-on to the Firefox add-on called Firebug) that gives your web pages a grade from A through F and tells you how to improve them.

If you have a build script, you may be able to automate a lot of these things, like combining JS and CSS files and optimizing PNG files (check out http://www.phpied.com/give-png-a-chance/ to see how to optimize from the command line). If you’re going to minify JavaScript, I would recommend YUI Compressor (http://developer.yahoo.com/yui/compressor/) since it’s not as greedy as Google’s Closure Compiler. The Closure Compiler (http://code.google.com/closure/compiler/) is relatively new and may produce even smaller files, but if your JavaScript is complex, it may introduce bugs because it’s a greedy compiler.

Anywhoot, here’s what I got from it:

  1. Reduce as many HTTP requests as possible.
  2. Minify JavaScript (don’t obfuscate, because it’s more trouble than it’s worth for the benefits you get)
  3. Minify CSS and optimize it (reduce duplication).
  4. Future-expire resources for caching (PNG, JPG, GIF, JavaScript and CSS).
  5. Minify HTML (get away from tables)
  6. Put CSS at the top, put JavaScript at the bottom.
  7. For design components (background images, button images, nav images), use CSS sprites.
  8. Use PNG for design components (they compress better than GIF, have partial transparency, and can have greater color palettes).
  9. Gzip non-binary data like HTML.
  10. Combine CSS and JavaScript into single files to reduce HTTP requests.
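Rule 9 is easy to quantify: markup is repetitive text, which is exactly what gzip exploits. A toy Python check (the sample HTML is made up):

```python
import gzip

# Repetitive markup compresses dramatically under gzip.
html = b"<html><body>" + b"<div class='row'>hello world</div>" * 500 + b"</body></html>"
compressed = gzip.compress(html)
print(len(html), "bytes ->", len(compressed), "bytes")
```

Real pages won’t be quite this repetitive, but HTML routinely shrinks severalfold over the wire.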

A summary of his optimization rules is found here, but of course, it’s not as detailed as the book: http://stevesouders.com/hpws/rules.php .

Stoyan Stefanov, another prominent developer who’s written tons on JavaScript and optimization, published 24 articles this month on optimization. I find these articles invaluable. It’s recent and he does actual optimization tests and tells you what tools he uses. Here’s the site: http://www.phpied.com/performance-advent-calendar-2009/

Order of Indexing Matters

In SQL Server, the order in which you declare index key columns matters! So take the following example:

And we run this query:

SELECT  cast(ca.AreaID AS varchar(4)),
        RemovalDate
FROM    ContentAreas ca WITH ( NOLOCK )
WHERE   ContentID = 232232
        AND removerID = 34343

If you run the above query, since ContentID is part of the WHERE clause, you want to put ContentID at the top of the index key columns in SSMS. It speeds up retrieval. Put RemoverID below that – test by viewing the execution plan first to see if it makes a difference. Remember, the more indexes you add, the longer it takes to maintain them on writes.

Add a Covered Index to tables to get rid of Bookmark Lookups

Add a Covered Index to tables to get rid of “bookmark lookups.” The idea behind bookmarks: SQL Server uses bookmark lookups when your query references columns that are not part of any clustered or non-clustered index. So if we have something like this:

SELECT  AreaID,
        ContentID,
        ApprovalCodeID,
        isSubmissionPoint

FROM    ContentAreas

WHERE   AreaID = 31
        AND ContentID > 30000
        AND ModifierID > 65000

And let’s say that AreaID is a clustered index and ContentID is a non-clustered index. If ModifierID is not an index of any kind, it would trigger a “bookmark lookup” in this query. To avoid the “bookmark lookup,” use a Covered Index: clump the ModifierID column with ContentID and turn both into a Covered Index (a kind of composite index).
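SQL Server specifics aside, the covering-index effect is easy to demonstrate with SQLite from Python: the query planner reports when an index alone satisfies a query. This sketch mirrors the table and column names above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ContentAreas (AreaID INT, ContentID INT, ModifierID INT)")
conn.executemany("INSERT INTO ContentAreas VALUES (?, ?, ?)",
                 [(31, 30001 + i, 65001 + i) for i in range(100)])
# Composite index holding every column the query touches:
conn.execute("CREATE INDEX IX_Covered ON ContentAreas (AreaID, ContentID, ModifierID)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT ContentID, ModifierID FROM ContentAreas "
    "WHERE AreaID = 31 AND ContentID > 30000"
).fetchall()
for row in plan:
    print(row[-1])
```

Because every selected and filtered column lives in the index, the plan can answer the query from the index alone, with no trip back to the table rows (SQL Server’s “bookmark lookup”).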

Securing a Flex App in Flash Media Server 2.0

I’ve found some limitations in securing a Flex app that uses the outdated Flash Media Server 2.0. The best way would appear to be the Access DLL module that is part of FMS 2.0 on the server side, but I could not find it documented in any useful detail by Adobe. In any case, the ways to secure it are as follows:

  1. Access DLL Module
    This module is mentioned in the docs, but does not seem to exist anywhere. It does not come right out of the box with FMS; it seems to be something one has to build and tweak via Visual C++.

    “The module can be configured to initiate a query of the organization’s database of users and passwords to determine if a connection should be allowed, and if it is allowed, the connection is accepted and the database updated with a record of the user’s access to Flash Media Server.”-
    http://livedocs.adobe.com/fms/2/docs/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00000513.html

    More on this mythical DLL: http://blog.lib.umn.edu/mcfa0086/discretecosine/055223.html

  2. Confirm location of client SWF
    This is what I would recommend, for an additional layer of security. This will prevent one from creating their own SWF and putting it on their own server to record and stream from our server. With this change in the main.asc, FMS will only record and stream if the SWF is located on your FMS server.

    Nonetheless, a user could still pass custom flashvars into the SWF on your servers; however, the bigger threat is someone passing their own flashvars into their own SWF that uses our FMS.

  3. Server-Side Validation:
    Pass an encrypted username/password to main.asc; main.asc validates the credentials, then connects.

Other Notes

Using other secure development practices (page 232 of flashmediaserver_managing.pdf):

“You might not want to use SSL in all your applications because of the additional processing time required to encrypt data over a secure connection. You can use other effective strategies to help protect all your media applications, regardless of what protocol is used for connections.”

When you deploy a Flash Media Server –
http://livedocs.adobe.com/fms/2/docs/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00000117.html – “use a server-side script to verify that connecting SWF files are coming from the location you expect (and not from an unknown computer). You can do this by checking the client.referrer property of the client object before the server accepts the connection. For more information about writing server-side scripts, see Developing Media Applications.”

Use server-side script precautions
http://livedocs.adobe.com/fms/2/docs/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_Parts&file=00000518.html

In server-side scripts do not use procedures that can be called by a malicious application, which could then fill a hard disk, consume the processor, or do other damage. Procedures attached to client objects are particularly vulnerable. Procedures to be aware of include writing to the hard disk without checking the quantity of data being written, procedures that can be infinitely looped, and so on.

Script Out Table Data in SQL Server 2008

SQL Server 2008 has a new feature where you can script out data in tables via INSERT statements. Just do the following:



1. Right click on the database and select “Generate Scripts…”

2. Select the database.

3. This is the hidden part. Set “Script Data” to true.

4. Select “Tables” as the object type.

5. Select the tables you want to script.

6. You can script out one file per table or a single file for everything. I prefer ANSI text.

7. Hit Finish.

VMware Virtual Machine for Windows 64-bit

Just tried to make a virtual machine using VMware workstation for a Windows 64-bit machine and I got this message:

VMware needs an enabled VT (Virtualization Technology) CPU to run a Windows 2008 64-bit guest. I ran the Intel utility to check whether this machine has it enabled, and it doesn’t. I get the VMware error while trying to install and can’t get any further. I checked the VMware documentation, and there isn’t any setting to emulate VT. I tried VirtualBox too; same deal. To make sure your PC is capable, you can try VMware’s processor check tool.

Show Time/Date/Weather in Perl

This little console program scrapes from http://www.timeanddate.com/ and prints to the screen. A handy little program I often use to tell the time in other cities.

sub getCityTime {  
  use LWP::Simple;
  my %hash;

  #new york http://www.timeanddate.com/worldclock/city.html?n=179
  #darwin   http://www.timeanddate.com/worldclock/city.html?n=72
  #tokyo    http://www.timeanddate.com/worldclock/city.html?n=248
  #peru     http://www.timeanddate.com/worldclock/city.html?n=131
  #         http://www.timeanddate.com/worldclock/results.html?query=darwin

  my $url = "http://www.timeanddate.com/worldclock/city.html?n=";
  
  %hash = ( 'newyork', '179', 
            'darwin',   '72', 
            'tokyo',    '248', 
            'lima',     '131'
          );
  
  my $html = get($url.$hash{$_[0]})
    or die "Couldn't connect to the timeanddate.com website.";
    
  $html =~ m!<span>([\s\w,;']*)</span>!;
  print "Location ======> ".$1."\n";  

  $html =~ m!<strong>([\s\w:,]*)</strong>!;
  print "Date/Time =====> ".$1."\n";  

  $html =~ m!<strong>(\d{1,3})[\s\w:,;&]*</strong>!;
  print "Temperature ===> ".$1.chr(176)."F\n";  # chr(176) is the degree symbol

}

# MAIN

if ($ARGV[0] ne "") {
  getCityTime( $ARGV[0] );
}
else {

  print "\nDisplays the time from TimeAndDate.com, based on city.\n\n";
  
  getCityTime( "newyork" );
  print "==============================\n";
  getCityTime( "lima" );
  print "==============================\n";
  getCityTime( "tokyo" );
  print "==============================\n";
  getCityTime( "darwin" );  

}

Generate DateTime based on Time Offset

So let’s say you have a table with a column of type DateTime. Now you have to support timezones. The first thing you create is a table of time offsets (Google Time Offsets). The table would look like this:

Here’s schema for it. You can download the full script with the data as well.

CREATE TABLE [dbo].[TimeZones](
  [TimeZoneID] [int] IDENTITY(1,1) NOT NULL,
  [Offset] [varchar](15) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
  [TimeLabel] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 CONSTRAINT [PK_TimeZones] PRIMARY KEY CLUSTERED 
(
  [TimeZoneID] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO

Now for the purpose of this tutorial, let’s create an event table. This will have a set of dates/times which we’ll use to add the timeoffsets to. The table looks like this:

and here’s the script:

CREATE TABLE [dbo].[TimeEvent](
  [TimeEventID] [int] IDENTITY(1,1) NOT NULL,
  [EventName] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
  [EventDateTime] [datetime] NULL,
  [TimeZoneID] [int] NULL,
 CONSTRAINT [PK_TimeEvent] PRIMARY KEY CLUSTERED 
(
  [TimeEventID] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]

Now let’s INNER JOIN the tables properly adding the timeoffset to the event datetime:

SELECT  tz.TimeZoneID,
        tz.Offset,        
        te.TimeEventID,
        te.EventName,
        te.EventDateTime AS 'OriginalEventDateTime',

        -- Hour - Extract Hour from Offset
        SUBSTRING(Offset, 0, CHARINDEX(':', Offset, 0)) AS 'Hour',
        
        -- Minute - Extract Minute from Offset        
        SUBSTRING(Offset, CHARINDEX(':', Offset, 0) + 1, LEN(Offset)) AS 'Minute',
        
        -- New Date with UTC Addition
        DATEADD(HOUR,
                CAST(SUBSTRING(Offset, 0, CHARINDEX(':', Offset, 0)) AS INT),
                DATEADD(MINUTE,
                        CAST(SUBSTRING(Offset, CHARINDEX(':', Offset, 0) + 1,
                                       LEN(Offset)) AS INT), EventDateTime -- DateTime change via Offset
                         )) AS 'EventDateTimeWithTimeZone'
FROM    TimeZones tz
        INNER JOIN TimeEvent te ON tz.TimeZoneID = te.TimeZoneID

Here’s the result:
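The same offset arithmetic can be sanity-checked in Python. One subtlety worth noting: for negative offsets, the minute component must carry the hour’s sign. A minimal sketch, assuming Offset strings shaped like the TimeZones column (“+5:30”, “-3:30”):

```python
from datetime import datetime, timedelta

def apply_offset(dt: datetime, offset: str) -> datetime:
    """Apply an Offset string like "+5:30" or "-3:30" to a datetime.
    The minute part is stored unsigned, so we re-apply the hour's sign."""
    hours, minutes = offset.split(":")
    sign = -1 if hours.startswith("-") else 1
    return dt + timedelta(hours=int(hours), minutes=sign * int(minutes))

print(apply_offset(datetime(2018, 1, 1, 12, 0), "+5:30"))
```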

World’s Smallest Program

Ouch, I guess the dream I had for having the world’s smallest program has been crushed. I made a program a long long time ago in Assembly using Windows 95’s DEBUG.EXE that rebooted the computer. The reason why it’s so small is that it reboots the computer, so once the BIOS Interrupt executes, there’s no need to do any other clean-up or maintenance. Unfortunately after Windows 95, Microsoft got smart and disabled DOS from executing these interrupts from DEBUG.EXE. Here was the code:

N REBOOT!.com
A 100
INT 19

R CX
2
W
Q

But what I really want is to see the code for this 0-byte program written in C.

Custom Fonts with CSS3

One great thing about CSS3 is its typography facilities. With that comes the ability to use fonts other than the web-safe ones such as Arial/Helvetica, Times New Roman/Times, or Courier New/Courier. Actually, it’s been a feature since CSS2, but at that time not all browsers supported the same type of font file. ( Man, why can’t browsers agree on *SOMETHING*.) You can read more about the font files that are supported (EOT, OTF, TTF), but for the most part, as always, IE is the buzz-kill. Currently, most recent-versioned browsers support the TTF format except IE8.

Anywhoot, here’s a demo that loads a TTF Upside-down font and applies the font to a text area. It lets you type upside-down. Code is very simple:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Custom Text</title>
<style type="text/css">
@font-face 
{
  font-family: "upsidedown_font"; 
  src: url("http://www.shinylight.com/wp-content/uploads/2010/01/Stagdive.ttf");
}

textarea 
{
  font-family: "upsidedown_font", Arial; 
  font-size:48px;
}
</style>

</head>

<body>
<h3>Type stuff. It'll be upside down.</h3>
<p>Only supported in modern-versioned browsers. Not supported in IE8. </p>
<textarea cols="60" rows="10"></textarea>
</body>
</html>

Logical vs Physical ER Diagrams

Logical diagrams are to convey requirements only. Physical diagrams represent the actual data structure to support the requirements and take into account technical scalability and speed.

Edit: I hate the way I had to format this document for this blog post. If you want this tutorial better formatted, check out the Word document.

One-to-Many Relationship


Logical

On ER/Studio, two tables are created. The Store table has a primary key, StoreID. The Manager table has a primary key, ManagerID, and a foreign key, StoreID (which is mapped to StoreID from the Store table).

Physical

On SQL Server, two tables are created. The Store table has a primary key, StoreID. The Manager table has a primary key, ManagerID, and a foreign key, StoreID (which is mapped to StoreID from the Store table).

Manager Table:

If you allow NULLs for StoreID in the Manager table, then you’ll be able to have a Manager without a store. If you don’t allow NULLs (leave it unchecked), then you’ll have to have at least one Store assigned to a Manager.

Querying

The above states that one store can have many managers. Here’s some sample data showing what’s in the tables:

[Sample data screenshots of the Store and Manager tables omitted.]

Get all manager information for all managers that belong to a store:

SELECT  Manager.ManagerID,  Manager.FirstName,  Manager.LastName,  Manager.StoreID,
        Store.[Name],  Store.Address,  Store.State,  Store.City,  Store.Zip
FROM    Manager
        INNER JOIN Store ON Manager.StoreID = Store.StoreID


Get all manager information for all managers (even if they don’t have a store):

SELECT  Manager.ManagerID,  Manager.FirstName,  Manager.LastName,  Manager.StoreID,
        Store.[Name],  Store.Address,  Store.State,  Store.City,  Store.Zip
FROM    Manager
        LEFT OUTER JOIN Store ON Manager.StoreID = Store.StoreID


Notice the NULL for Steamboat Willie. He doesn’t have a store, so all Store related fields show as NULL.


Many-to-Many Relationship



In order to implement this physically, you need a join table. In this case, we use StoreManager. Logically, you only need two entities (Store and Manager).

Logical

Physical


On SQL Server, three tables are created. The Store table has a primary key, StoreID. The Manager table has a primary key, ManagerID. The table StoreManager has two foreign keys: StoreID (which is mapped to StoreID from the Store table) and ManagerID (which is mapped to ManagerID from the Manager table).

StoreManager Table:

If you allow NULLs for StoreID and ManagerID in the StoreManager table, you can have association rows with a missing store or manager. If you don’t allow NULLs (leave both unchecked), every StoreManager row must link a Manager to a Store.

Here’s some sample data showing what’s in the tables:

[Sample data screenshots of the Store, StoreManager, and Manager tables omitted.]

Get all manager information associated with his store:
SELECT  Manager.ManagerID,  Manager.FirstName,  Manager.LastName,
        StoreManager.StoreID, StoreManager.ManagerID, Store.StoreID,
        Store.[Name],Store.Address, Store.State, Store.City, Store.Zip
FROM    Store
        INNER JOIN StoreManager ON Store.StoreID = StoreManager.StoreID
        INNER JOIN Manager ON StoreManager.ManagerID = Manager.ManagerID

SQLCompare via Command Console

If you haven’t heard of SQLCompare by RedGate, you’re missing out. It’s an amazing product. In summary, you can do the following (which is what I use it for 99% of the time):

  • Compare Schema / TSQL code from Different Databases
  • Sync Schema / TSQL code across two databases
  • Generate TSQL code from your comparison so you can use it for deployments
  • Generate reports on SQL Changes

Most of those things can be done via the GUI version of the tool. The product also comes with a command-line version, which you can use in your build script.

To script out your entire database via the command console, you can do the following:

SQLCompare /force /database1:YOURDATABASENAME /username1:sa /password1:password /server1:YOURDATABASESERVERNAME /makescripts:c:\x

To compare two databases and generate a report via the command console, you can do the following. The reason there are so many switches is that you need to enter the database name and credentials for both databases.

SQLCompare /force /database1:DB1NAME /username1:sa /password1:password /server1:SERVER1NAME /database2:DB2NAME /username2:sa /password2:password /server2:SERVER2NAME /report:c:\report.html /reporttype:Interactive