Tuesday, October 05, 2010

Windows 7, Wired and Wireless networks

I've been on Windows 7 (64-bit) for a few months now, and just recently moved to a new Lenovo T510. The move, using Windows backup and restore, was downright painless and just worked.

One issue with the new setup that frustrated me: when docked, the laptop kept a strong preference for wireless connections over wired ones. Apparently, Windows 7 defaults to an automatic route-preference calculation, and it had my wireless at 90 and my wired connection at 900-ish. (Lower numbers indicate stronger preference.)

You can see your current, auto-assigned routing metrics by typing netstat -r at a command prompt and looking in the 'Metric' column of the IPv4 Route Table.

When I'm docked, I want wired to take precedence: the wired connection has a static IP, which makes possible certain dev and admin tasks that can't be done around here from a regularly-shifting DHCP address on wireless - things like sending SMTP mail without credentials, the way servers get to.

So, using articles from Lifehacker and Palehorse as a starting point, I set out to change my interface metrics to suit my needs.

  1. Starting with Control Panel\Network and Internet\Network and Sharing Center, click the wired connection under Connections.

  2. Then click 'Properties'.
  3. Then select 'Internet Protocol Version 4 (TCP/IPv4)' and click 'Properties'.
  4. Click 'Advanced' at the bottom of the window.
  5. Un-check the 'Automatic Metric' box, and enter a new routing metric here. Lower numbers are higher priority. I used a routing metric of 10 for my wired connection and 200 for my wireless, and everything seems to be behaving exactly as I want.
  6. Repeat from step 1, selecting your wireless network connection.
  7. Reboot when done to have the new assignments take effect.
As above, you can verify your assignments by typing netstat -r at a command prompt and looking in the 'Metric' column of the IPv4 Route Table.
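
If you'd rather skip the dialogs, netsh can make the same changes from an elevated command prompt. A hedged sketch - the interface names here are examples, so substitute whatever the first command lists for your adapters:

netsh interface ipv4 show interfaces
netsh interface ipv4 set interface "Local Area Connection" metric=10
netsh interface ipv4 set interface "Wireless Network Connection" metric=200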

A word of warning: you may want to check with your networking people before you do this. Routers can assign this metric, and overriding what they assign may have unintended consequences for how you access your networks.

Bill

Thursday, May 13, 2010

Daylight Savings Time in SQL Server

We're running a vendor system that stores every date/time value in GMT. This isn't a bad thing, but it does require converting GMT -> local time every place we use one: select statements, where clauses, correlated subqueries, etc. To do this, we started out years ago with a simple function plus a database table to store DST dates.

That function looked like this:

CREATE FUNCTION [dbo].[ConvertGMT]
(@MYDATE DATETIME)
RETURNS DATETIME AS
BEGIN
DECLARE @OFFSET INT
DECLARE @YEAR INT
DECLARE @DST_BEGIN DATETIME
DECLARE @DST_END DATETIME
DECLARE @RETURNED_DATE DATETIME

SET @YEAR = YEAR(@MYDATE)
SET @DST_BEGIN = (select DST_BEGIN from dbo.DST_LOOKUP where YEAR_LOOKUP = @YEAR)
SET @DST_END = (select DST_END from dbo.DST_LOOKUP where YEAR_LOOKUP = @YEAR)
IF (@MYDATE BETWEEN @DST_BEGIN AND @DST_END)
SET @OFFSET = -4
ELSE
SET @OFFSET = -5

SET @RETURNED_DATE = DATEADD(hour,@OFFSET,@MYDATE)
RETURN @RETURNED_DATE
END

and the associated table looked like this:

DST_BEGIN              DST_END                YEAR_LOOKUP
4/4/1999 2:00:00 AM    10/31/1999 3:00:00 AM  1999
4/2/2000 2:00:00 AM    10/29/2000 3:00:00 AM  2000
4/1/2001 2:00:00 AM    10/28/2001 3:00:00 AM  2001
4/7/2002 2:00:00 AM    10/27/2002 3:00:00 AM  2002
4/6/2003 2:00:00 AM    10/26/2003 3:00:00 AM  2003
4/4/2004 2:00:00 AM    10/31/2004 3:00:00 AM  2004
4/3/2005 2:00:00 AM    10/30/2005 3:00:00 AM  2005
4/2/2006 2:00:00 AM    10/29/2006 3:00:00 AM  2006
3/11/2007 2:00:00 AM   11/4/2007 3:00:00 AM   2007
3/9/2008 2:00:00 AM    11/2/2008 3:00:00 AM   2008
3/8/2009 2:00:00 AM    11/1/2009 3:00:00 AM   2009
3/14/2010 2:00:00 AM   11/7/2010 3:00:00 AM   2010
3/13/2011 2:00:00 AM   11/6/2011 3:00:00 AM   2011

This worked well enough for a while, but it cost a couple of table hits for every conversion. That adds up when you're doing 6 conversions per row on queries returning a couple hundred rows a few times a minute. This benchmarked at ≅15 seconds to return 45,000 rows at 3 conversions per row in a simple select.

Our next step was to eliminate the table lookup. That was simple and inelegant, and looked like this:

ALTER FUNCTION [dbo].[ConvertGMT] (@MYDATE DATETIME)
RETURNS DATETIME AS
BEGIN
DECLARE @OFFSET INT
DECLARE @YEAR INT
DECLARE @RETURNED_DATE DATETIME

SET @YEAR = YEAR(@MYDATE)

IF @YEAR = 2010 AND @MYDATE between 'Mar 14 2010 2:00AM ' and 'Nov 7 2010 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2009 AND @MYDATE between 'Mar 8 2009 2:00AM ' and 'Nov 1 2009 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2008 AND @MYDATE between 'Mar 9 2008 2:00AM ' and 'Nov 2 2008 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2007 AND @MYDATE between 'Mar 11 2007 2:00AM ' and 'Nov 4 2007 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2006 AND @MYDATE between 'Apr 2 2006 2:00AM ' and 'Oct 29 2006 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2005 AND @MYDATE between 'Apr 3 2005 2:00AM ' and 'Oct 30 2005 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2004 AND @MYDATE between 'Apr 4 2004 2:00AM ' and 'Oct 31 2004 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2003 AND @MYDATE between 'Apr 6 2003 2:00AM ' and 'Oct 26 2003 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2002 AND @MYDATE between 'Apr 7 2002 2:00AM ' and 'Oct 27 2002 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2001 AND @MYDATE between 'Apr 1 2001 2:00AM ' and 'Oct 28 2001 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2000 AND @MYDATE between 'Apr 2 2000 2:00AM ' and 'Oct 29 2000 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 1999 AND @MYDATE between 'Apr 4 1999 2:00AM ' and 'Oct 31 1999 3:00AM ' SET @OFFSET = -4

ELSE IF @YEAR = 2011 AND @MYDATE between 'Mar 13 2011 2:00AM ' and 'Nov 6 2011 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2012 AND @MYDATE between 'Mar 11 2012 2:00AM ' and 'Nov 4 2012 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2013 AND @MYDATE between 'Mar 10 2013 2:00AM ' and 'Nov 3 2013 3:00AM ' SET @OFFSET = -4
ELSE IF @YEAR = 2014 AND @MYDATE between 'Mar 9 2014 2:00AM ' and 'Nov 2 2014 3:00AM ' SET @OFFSET = -4

ELSE SET @OFFSET = -5

SET @RETURNED_DATE = DATEADD(hour,@OFFSET,@MYDATE)
RETURN @RETURNED_DATE
END
The performance boost with this version was amazing: it processed the same 45,000-row, 3-conversions-per-row query in ≅3 seconds. Across all our GUI and report calls, that was a noticeable performance boost. But it looked like something someone in CS101 would write, and it still required occasional updates to add new years and keep current.

The final step - and current solution - started with a DST discussion at MSSqlTips that used some pre-calculated offsets to dynamically derive the DST start and stop dates from the day of the week on which the start and end months begin. (In the function below, the hour offsets in each CASE encode the target Sunday as a number of hours from the first of the month, one value per possible starting weekday.) This worked well for dates in 2007 and later, but earlier dates follow a different pattern, so I continued the model Tim Cullen started in the MSSqlTips post and used a set of pre-calculated offsets for the 2006-and-earlier start dates, with dynamically calculated DST end dates.

The final function looks like this:

CREATE FUNCTION [dbo].[ConvertGMT]
(@MYDATE DATETIME)
RETURNS DATETIME AS
BEGIN
DECLARE @OFFSET INT
DECLARE @YEAR INT
DECLARE @DST_BEGIN DATETIME
DECLARE @DST_END DATETIME
DECLARE @RETURNED_DATE DATETIME

SET @YEAR = YEAR(@MYDATE)

declare @DSTStartWeek smalldatetime, @DSTEndWeek smalldatetime

if @YEAR >= 2007
BEGIN
-- 2007 and later: DST starts the second Sunday in March;
-- each CASE offset below encodes that Sunday (at 2:00 AM) in hours from March 1
set @DSTStartWeek = '03/01/' + convert(varchar,@YEAR)
SET @DST_BEGIN = case datepart(dw,@DSTStartWeek)
when 1 then
dateadd(hour,170,@DSTStartWeek)
when 2 then
dateadd(hour,314,@DSTStartWeek)
when 3 then
dateadd(hour,290,@DSTStartWeek)
when 4 then
dateadd(hour,266,@DSTStartWeek)
when 5 then
dateadd(hour,242,@DSTStartWeek)
when 6 then
dateadd(hour,218,@DSTStartWeek)
when 7 then
dateadd(hour,194,@DSTStartWeek)
end

-- DST ends the first Sunday in November
set @DSTEndWeek = '11/01/' + convert(varchar,@Year)
SET @DST_END = case datepart(dw,dateadd(week,1,@DSTEndWeek))
when 1 then
dateadd(hour,2,@DSTEndWeek)
when 2 then
dateadd(hour,146,@DSTEndWeek)
when 3 then
dateadd(hour,122,@DSTEndWeek)
when 4 then
dateadd(hour,98,@DSTEndWeek)
when 5 then
dateadd(hour,74,@DSTEndWeek)
when 6 then
dateadd(hour,50,@DSTEndWeek)
when 7 then
dateadd(hour,26,@DSTEndWeek)
end
END
ELSE
BEGIN
-- 2006 and earlier: DST starts the first Sunday in April
set @DSTStartWeek = '04/01/' + convert(varchar,@YEAR)
SET @DST_BEGIN = case datepart(dw,@DSTStartWeek)
when 1 then
dateadd(hour,2,@DSTStartWeek)
when 2 then
dateadd(hour,146,@DSTStartWeek)
when 3 then
dateadd(hour,122,@DSTStartWeek)
when 4 then
dateadd(hour,98,@DSTStartWeek)
when 5 then
dateadd(hour,74,@DSTStartWeek)
when 6 then
dateadd(hour,50,@DSTStartWeek)
when 7 then
dateadd(hour,26,@DSTStartWeek)
end
-- 2006 and earlier: DST ends on the Sunday on or before November 1
SET @DSTEndWeek = '11/01/' + convert(varchar,@Year)
SET @DST_END = dateadd(hh,3,dateadd (dd,(1 - datepart(dw,@DSTEndWeek)),@DSTEndWeek))

END

IF @MYDATE between @DST_BEGIN and @DST_END
SET @OFFSET = -4
ELSE
SET @OFFSET = -5

SET @RETURNED_DATE = DATEADD(hour,@OFFSET,@MYDATE)
RETURN @RETURNED_DATE
END
This code has one change from Tim's: for 2007 and later, DST now ends at 3 AM in November. This is what we have in production now. It's just as performant as the previous version (≅3 seconds for the benchmark query), but it has the advantage of never needing updates for new years. It's more elegant than its predecessors, but I have no doubt that it could be done better.
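
For completeness, here's what calling it looks like - a hedged sketch, with made-up table and column names:

-- convert a stored GMT timestamp on the way out
SELECT case_id, dbo.ConvertGMT(scheduled_start_gmt) AS scheduled_start_local
FROM dbo.cases

-- quick spot check: July 4, 2010 falls inside DST, so the GMT-4 offset applies
SELECT dbo.ConvertGMT('2010-07-04 16:00:00')  -- returns 2010-07-04 12:00:00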

Use, share, comment.

-Bill


(Thanks to the Wikipedia DST Article for clarity on this.)

Friday, June 26, 2009

Apache, SSL and Tomcat Clustering

With help from the Apache HTTPd docs, the Apache Tomcat docs, and lots of Google-help, I've set up a failover-friendly, load-balanced server pair that seriously ups our game in terms of high availability, and boosts performance as a by-product.

Background (skip this if you just want to get to the server setup details)
My group does custom web applications for use in the Operating Rooms. We interface with multiple clinical and administrative systems and show, collect, and share data that's critical to operations and to patient safety. Our longest-running application, ORview, is a Java web app that collects pre- and post-operative assessments, provides airport-monitor-style big-screen views throughout the OR area, and provides other billing, reporting, and QA/QI functionality. Our newest application is RequestOR, which collects posting requests for new cases and shares that data, via HL7, with GE's Centricity Perioperative Manager and IDX. Our old ORview prod server was at end-of-life, and we needed a new server to deploy RequestOR on, so we designed, spec'd, and installed a new server cluster that'll meet the needs of both of our major applications and give us some room to migrate some of our smaller applications (Java and JRuby).

Server Hardware
Qty 2 - IBM HS21 blade servers: dual quad-core Xeon @ 3.0GHz, 8GB PC2-5300 RAM, 146GB SAS HDD.

Server Software
Red Hat Enterprise Linux Server release 5.3 (Tikanga)
Java SE Runtime Environment 64-bit (build 1.6.0_13-b03)
Apache HTTPd 2.2.3 (httpd-2.2.3-22.el5_3.1) with mod_ssl (mod_ssl-2.2.3-22.el5_3.1) and mod_proxy_ajp (as part of the httpd install)
Apache Tomcat 6.0.18

Network/DNS Configuration
The key to doing multiple SSL-secured applications is that each unique SSL certificate needs its own IP address and host name to bind to. (There are some budding ways around that, but none of them were mature enough going into this process to be a viable production option.)

requestor: 10.20.215.226, 10.20.215.228 - configured as dns round-robin
orview2: 10.20.215.227, 10.20.215.229 – configured as dns round-robin

Those IPs get distributed to each server that's hosting that application, and each server also has its own IP.

server 1
10.20.215.223 – blade1
10.20.215.226 – requestor
10.20.215.227 – orview2
228.0.0.23 - multicast

server 2
10.20.215.224 – blade2
10.20.215.228 – requestor (dns rr)
10.20.215.229 – orview2 (dns rr)
228.0.0.23 - multicast
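
On the RHEL side, the per-application addresses ride along as additional addresses on the blade's interface. A hedged sketch of how server 1's extras could be brought up by hand - the device name and prefix length are assumptions:

ip addr add 10.20.215.226/24 dev eth0 label eth0:1
ip addr add 10.20.215.227/24 dev eth0 label eth0:2

(To survive reboots, the equivalent ifcfg-eth0:1-style alias files go under /etc/sysconfig/network-scripts/.)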

Apache HTTPd Configuration
First, the explanation. There's a default port 80 host that just answers requests on the machine's unique IP; we use its index.html to point to a simple machine ident. The RequestOR section comes next. It defines a port 80 host that redirects all requests to the port 443 (SSL) version of itself; the second RequestOR virtual host is that SSL version, and it contains the SSL certificate config plus a reference to balancer://ajpCluster/requestor as the handler for all requests to the virtual host. The ORview2 section largely duplicates this configuration for the second SSL application on these servers. The second server's config files are identical, except the IPs are changed to match that server's configuration.

<VirtualHost *:80>
DocumentRoot /var/www/html
ServerName blade1.foo.edu
</VirtualHost>
# RequestOR virtual hosts
<VirtualHost 10.20.215.226:80>
DocumentRoot /var/www/html/requestor
ServerName requestor.foo.edu
Redirect permanent / https://requestor.foo.edu/
</VirtualHost>
<VirtualHost 10.20.215.226:443>
DocumentRoot /var/www/html/requestor
ServerName requestor.foo.edu
SSLCertificateFile /etc/pki/tls/certs/requestor.foo.edu.crt
SSLCertificateKeyFile /etc/pki/tls/private/requestor.foo.edu.key
SSLEngine on
SSLProtocol all -SSLv2
<Location />
ProxyPass balancer://ajpCluster/requestor stickysession=JSESSIONID
</Location>
# ErrorLog logs/dummy-host.example.com-error_log
# CustomLog logs/dummy-host.example.com-access_log common
</VirtualHost>

#ORview2 virtual hosts
<VirtualHost 10.20.215.227:80>
DocumentRoot /var/www/html
ServerName orview2.foo.edu
Redirect permanent / https://orview2.foo.edu/
</VirtualHost>
<VirtualHost 10.20.215.227:443>
DocumentRoot /var/www/html
ServerName orview2.foo.edu
SSLCertificateFile /etc/pki/tls/certs/orview2.foo.edu.crt
SSLCertificateKeyFile /etc/pki/tls/private/orview2.foo.edu.key
SSLEngine on
SSLProtocol all -SSLv2
<Location />
ProxyPass balancer://ajpCluster/orstat stickysession=JSESSIONID
</Location>
</VirtualHost>


Apache mod_proxy_ajp Configuration
This is the config for the AJP load balancer. Tomcat on each server is set to listen for AJP requests on port 8009. This config file (the same on each server) tells the AJP balancer about the cluster composed of tomcatA and tomcatB. In the absence of any other details, it defaults to sending new requests to the least-loaded Tomcat server, and to sending requests from existing sessions to the server that's been handling them - that's what the stickysession attribute makes happen. Proxy listeners are configured for each application, matching the virtual hosts above.

LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
#
# When loaded, the mod_proxy_ajp module adds support for
# proxying to an AJP/1.3 backend server (such as Tomcat).
# To proxy to an AJP backend, use the "ajp://" URI scheme;
# Tomcat is configured to listen on port 8009 for AJP requests
# by default.
#
<Location /balancer-manager>
SetHandler balancer-manager
</Location>
<Proxy balancer://ajpCluster>
BalancerMember ajp://blade1.foo.edu:8009 route=tomcatA
BalancerMember ajp://blade2.foo.edu:8009 route=tomcatB
</Proxy>
<Location /requestor>
ProxyPass balancer://ajpCluster/requestor stickysession=JSESSIONID
</Location>

<Location /orstat>
ProxyPass balancer://ajpCluster/orstat stickysession=JSESSIONID
</Location>



Tomcat's server.xml
The server.xml is generally a good-sized file where most of the defaults are just fine. I've excerpted the relevant bits here that I had to change to get clustering working.

A connector is defined in the <server> portion of the file. It should be enabled by default.

<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />


There should be one <engine> element. Edit it to include the unique name used for this server in the proxy-ajp config file - tomcatA in this case. The other server's server.xml looks the same, except with tomcatB here.

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatA">


A cluster element is defined inside the <engine> element. The important thing to set here is the multicast address (the McastService address below) used by Tribes to synchronize session information across the servers in the cluster. The FarmWarDeployer near the bottom is experimental and (as of when this was written) isn't ready for prime time.

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="6">

<Manager className="org.apache.catalina.ha.session.BackupManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"
mapSendOptions="6"/>
<!--
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
-->
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.23"
port="45564"
frequency="500"
dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto"
port="5000"
selectorTimeout="100"
maxThreads="6"/>

<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel>

<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>

<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>


<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>


Conclusion
It's completely possible to get high-availability load balancing and clustering working for Apache's HTTPd and Tomcat under Linux. The performance and fault-tolerance benefits are completely worth it, and thanks to a lot of work done by a lot of dedicated people, it's pretty easy to get set up and running. I send my thanks to every site I Googled while figuring out how to get this working - there are too many to count. If you have questions or corrections, please post them here, and I'll do what I can to help figure them out.

Monday, July 16, 2007

Unix File Permissions

A useful tidbit that I just discovered: if you want to set unix file permissions for the group to match the owner's, it's as simple as:

chmod g=u filename
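
For example (the file name and starting permissions here are made up):

$ ls -l report.txt
-rwxr--r-- 1 bill staff 1024 Jul 16 09:00 report.txt
$ chmod g=u report.txt
$ ls -l report.txt
-rwxrwxr-- 1 bill staff 1024 Jul 16 09:00 report.txt

It works recursively, too: chmod -R g=u dirname.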

This just saved me boatloads of time.

Discovered at: comp.unix.questions

-Bill

Thursday, May 17, 2007

Harnessing Capistrano

Harnessing Capistrano - RailsConf2007 Tutorial
Jamis Buck

Slides and demos available here.

Started out as a Deployment tool

Can also use it for:
+ monitoring tool across all servers (ps, df, uname, etc.)
+ server maintenance (mounts, symlinks, ...)
+ troubleshooting

A basic config file and a demo.
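
It looked something like this - a hedged sketch of a minimal Cap 2.x deploy.rb from memory, with made-up application and host names:

set :application, "myapp"
set :repository, "svn://svn.example.com/myapp/trunk"

role :app, "app1.example.com", "app2.example.com"
role :web, "app1.example.com"
role :db,  "app1.example.com", :primary => true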

A config using a gateway and a demo.

A cool description and demo of 'cap -e shell' which creates and caches a connection to each deployment server. Can be scoped by role or host.

Cap 2.0 adds namespaces for tasks - sort of a way of grouping like tasks (e.g. cap tail:showuploadlog groups the showuploadlog task into the tail namespace).
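
A hedged sketch of what a namespaced task looks like - the task body and log path are invented:

namespace :tail do
  desc "Tail the upload log on the app servers"
  task :showuploadlog, :roles => :app do
    stream "tail -f #{shared_path}/log/upload.log"
  end
end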

Can do variables.

And transactions, so you can make sure tasks complete on all servers or they all roll back - no in-between. There's no good way to recover if a rollback fails.

All sorts of options for including other cap files. This was always optional in Cap 1.x, but there is no default config in Cap 2.0.

You can also script-check dependencies (cap deploy:check). That looks damn useful.

script/spin tells Cap how to start your application. Many times it's script/process/spawner.

cap deploy:cold <- first time deploy

cap deploy:rollback
cap deploy:migrations
cap deploy:web:disable
cap deploy:web:enable
cap deploy:cleanup <- clean up all but the last 5 releases

In addition to a standard version-control-based checkout, you can do other types: export, copy, remote cache, or you can roll your own. (Set by the :deploy_via option.)

There are all sorts of nifty options to get Cap to work with all sorts of version control systems.

Lots of helpers: run, sudo, stream, connect, find_task, etc.

In the way of advanced usage, there are:
+ before_ and after_ events (before_deploy: run_tests - see the sketch after this list)
+ custom callbacks (email notifications, etc.) with complex rules
+ staging environments: you can script deploys to behave differently based on target
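
A hedged sketch of the Cap 2 callback style - the custom task names here are invented:

before "deploy", "custom:run_tests"
after  "deploy", "custom:notify_team"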

JRuby on Rails Tutorial

JRuby on Rails - RailsConf 2007 Tutorial

Advantages:
Take advantage of existing Java libraries.
Run on Java infrastructure.
Supports Java's multibyte/unicode strings.
Supports Ruby's thread API; one Ruby thread = one native thread, for good multicore use.
True concurrent threading.

Great progress in the past year: from barely running Rails 1.1 last year to running all of Rails (except some low-demand dark corners) now, with good performance on everything but RDoc.

Most Gems just work. Anything pure Ruby (or with a JRuby port) runs.
Webrick works.
JRuby port of Mongrel works.

Differences:
1-Database support
Pure Ruby drivers work - MySQL.
All the JDBC drivers you'd want to use work (yay!), though some need custom coding to support migrations. (A config sketch follows below.)
JNDI for connection pooling. (More yay!)
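
Just for reference, a hedged sketch of what a database.yml entry looked like with the ActiveRecord-JDBC adapter in its generic form - the driver class is MySQL's, and the host, database name, and credentials are made up:

production:
  adapter: jdbc
  driver: com.mysql.jdbc.Driver
  url: jdbc:mysql://db.example.com:3306/myapp_production
  username: myapp
  password: secret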

2-No Native Extensions
Unless there's a port.
Mongrel - done
Hpricot - done
Database support - some done, some in progress
RMagick - in progress

3-Command-line Performance
Very good (possibly faster) once you're running, but typical Java-slow startup performance.

Deployment
1. Mongrel works well. No process forking, process management.
But why? Use Java app servers via the GoldSpike (or rails-integration) plugin.
2. Build WAR files for Rails apps. (See the sketch after this list.)
One plugin, pure Ruby, out comes a deployable WAR file.
3. GlassFish server gem. (Sort of a "Pack" in the box implementation.)
Not yet. But soon.
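
For item 2, the build was roughly a one-liner once the plugin was installed - a hedged sketch; the task name is from memory, so check the GoldSpike docs:

jruby -S rake war:standalone:create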

Migrating existing Rails apps to JRuby/Rails
Be aware of the currently unsupported features.
1. Database support
- MySQL is great
- Derby & HSQL work well. Small embeddable DBs.
- Postgres - a few failures out of 1000+ tests.
- Oracle - starting to get attention.
- MS SQL Server & DB - need help; haven't really been worked on much.
Migrations mostly work well; tricky on some DBs that don't have all the features.
Fixtures work well, as do parts of the Rails tests. Issues are generally YAML rather than DB issues.
2. Native Extensions
Option 1 - Use something else, a.k.a. don't do it.
Option 2 - Use an equivalent Java library. (Binds you to JRuby.)
Option 3 - Port the library yourself.
Option 4 - Port by wrapping a Java library.
3. Deployment Options
- Mongrel: works, but not the most efficient
- Existing WebApp Server: good concurrency, clustering, resource pooling
- Grizzly/GlassFish v3 option: lightweight, Mongrel-like gem install

Monday, January 22, 2007

Cascading Drop-downs in Rails

We were looking for a solid example of cascading dynamic drop-down select lists to use in our Rails application, and found the web sorely lacking in solid examples. We found a very good start at http://www.railsweenie.com/forums/2/topics/767, but it wasn't complete enough and didn't entirely work. So my very good buddy and co-worker Sheri and I figured this out, got it working for our app, and wanted to document it here in the hopes that it helps someone else. Here's the nutshell version:

It's probably already there, but make sure this line is in your standard_layout.rhtml:

<%= javascript_include_tag :defaults %>

Add this function def to application_helper.rb:

def update_select_box( target_dom_id, collection, options={} )
  # Set the default options
  options[:text] ||= 'name'
  options[:value] ||= 'id'
  options[:include_blank] ||= true
  options[:clear] ||= []
  pre = options[:include_blank] ? [['','']] : []
  out = "update_select_options( $('" << target_dom_id.to_s << "'),"
  out << "#{(pre + collection.collect{ |c| [c.send(options[:text]), c.send(options[:value])] }).to_json}" << ","
  out << "#{options[:clear].to_json} )"
end

This calls update_select_options which needs to go into application.js:

function update_select_options( target, opts_array, clear_select_list ) {

  if( $(target).type.match("select") ){ // Confirm the target is a select box

    // Remove existing options from the target and the clear_select_list
    clear_select_list[clear_select_list.length] = target; // Include the target in the clear list

    var obj;
    for( var k = 0; k < clear_select_list.length; k++ ){
      obj = $(clear_select_list[k]);
      if( obj.type.match("select") ){
        var len = obj.childNodes.length;
        for( var i = 0; i < len; i++ ){ obj.removeChild( obj.childNodes[0] ); }
      }
    }

    // Populate the new options (obj still points at the target,
    // since the target was the last entry in the clear list)
    for( var i = 0; i < opts_array.length; i++ ){
      var o = document.createElement( "option" );
      o.appendChild( document.createTextNode( opts_array[i][0] ) );
      o.setAttribute( "value", opts_array[i][1] );
      obj.appendChild( o );
    }
  }
}

Add something like this to the form.rhtml (changing the name of the observable field as appropriate):

<%= observe_field 'item[facility_id]', :frequency => 0.5,
:update => 'location_id', :url =>
{ :controller => 'item', :action=> 'refreshLocation' },
:with => "'facility_id=' + escape(value)" %>

Add something like this to the controller:

def refreshLocation
  @facilities = Facility.find(:all)
  @facility = Facility.find(params[:facility_id])
  @locations = Location.find_all_by_facility_id(params[:facility_id])
  render :update do |page|
    # the first argument must match the DOM id of the select being refreshed
    page << update_select_box( 'item_location_id', @locations, { :text => :description } )
  end
end


This tidbit in the form.rhtml is the ultimate target of all this work (this is the drop-down we want to refresh):

<%= select_tag "item[location_id]", options_from_collection_for_select(@locations, :id, :description) %>



If I missed any code attributions from the various sources we pieced this together from, I'm sorry. Write me and I'll make good and give attribution where appropriate.

If you have any questions about this, post 'em and I'll take my best shot at answering.

And finally, thanks Sheri! Couldn't have done it without you!

Sunday, December 10, 2006

Rails Projects Abound

Okay, abound is a little strong. But we've got one new and one to-finish Ruby on Rails project on our plates at work. The new project is a quickie tracking app for all inventory items containing human or animal tissue that get used in the operating rooms. Apparently, it's a JCAHO requirement that, as of January 1, 2007, all hospitals track all items that contain any amount of tissue - we've always had to track tissue-based implants: heart valves, bone chips, etc. The idea being floated was an MS Access app, but I've grown way too tired of supporting those; plus, with Rails we can now code a web app as fast as we can do a desktop Access app, all things being equal.

Politically speaking, it'll be very cool for us to pull this off in a two-week sprint, and as simple and straightforward as it is, I'm thinking we'll be done much sooner, even while teaching Ruby and Rails to a couple of long-term Smalltalk/Java/Struts programmers. Technically speaking, the more Rails code we can get into production, the happier I'll be.

The other Rails app on our near horizon is to spend some time finishing Tart, the request-management app that I've been learning on and twiddling with for over a year now. It's a sort of issue tracker that incorporates multiple approvals (IRB and departmental) and is tuned to our request process. It'll be good to get it launched and out there in people's hands.

Oh, the update I promised you in the last post? She interviewed very well, blew the socks off everyone she met, and she started on our team last week.

-Bill



Wednesday, October 25, 2006

Falling in love all over again

You know when you're hiring and the perfect resume comes across your desk? Well, it happened today. Literally the perfect candidate fell into our laps. I talked to her, and she sounds like exactly (really, exactly) what I'm looking for. She's got Struts experience. She's got Hibernate experience. She's got a serious value for test-driven development. Hell, she's got values as a programmer! She likes, understands, and wants to work in an Agile shop. From talking to her, she's got a pretty laid-back personality. I think I'm in love.

Her interview is set up for Friday, and I'm really curious to see how she's going to click with the team.

Well, world.... I'll keep you posted on how it goes.

-Bill

Monday, September 04, 2006

Team Building as Magic

I've recently been tasked with staffing our group back up to its fullest potential. Recent comings and goings have left us with 2 staffed and 4 open positions, not including mine. As I've been wading through resumes and talking with candidates, something struck me: building a team from near-scratch like this bears a great resemblance to the strategic card games Magic and Pokemon. To start a game in one of these, you sort through your collection of cards (offensive, defensive, special use, environmental, etc.) and build a game deck of 15 or so cards that you play that game with. Part of the gameplay is luck, but far more of it is strategy, hinging on which cards you pick for your deck and how you play them as the game progresses.

Building a team for success at work is no different. I have dozens of likely candidates as potential choices, each with their own technical and personal strengths and weaknesses. The team I put together will determine our success over the coming months and years. In addition to core technical competencies, I'm putting emphasis on professional developer talents such as testing and agile team experience, and I'm banking that that emphasis will serve us well. Other traits I'm finding valuable are a passion for craft, meaning I want people to be passionate about doing things the right way (over just getting them done), and people skills over technical skills, meaning I don't want an uber-guru who keeps everyone around them ticked off or can't communicate. Ideally you want someone with all these skills and all the ideal traits in one package (or even a whole team of folks like that), but those folks are as rare and hard to find as heroes in Magic. I think a solid, heterogeneous team with just the right blend of skills that can work well together is far more realistic and achievable.

I'm off to build my winning deck.