Okay, abound is a little strong. But we've got one new one and one to-finish Ruby on Rails project on our plates at work. The new project is a quickie tracking app for all inventory items containing human or animal tissue that get used in the operating rooms. Apparently, it's a JCAHO requirement that, as of January 1, 2007, all hospitals have to track all items that contain any amount of tissue - we've always had to track tissue-based implants: heart valves, bone chips, etc. The idea that was being floated was for an MS Access app, but I've grown way too tired of supporting those, plus with Rails we can now code a web app as fast as we can do a desktop Access app, all things being equal.
Politically speaking, it'll be very cool for us to pull this off in a two-week sprint, and as simple and straightforward as it is, I'm thinking we'll be done much sooner, even with teaching Ruby and Rails to a couple of long-term Smalltalk/Java/Struts programmers. Technically speaking, the more Rails code we can get in production, the happier I'll be.
The other Rails app on our near horizon is to spend some time finishing Tart, the request management app that I've been learning and twiddling with for over a year now. It's sort of an issue tracker that incorporates multiple approvals (IRB and departmental), and is tuned to our request process. It'll be good to get it launched and out there in people's hands.
Oh, the update I promised you in the last post? She interviewed very well, blew the socks off everyone she met, and she started on our team last week.
-Bill
Tags: RubyOnRails, Rails
Wednesday, October 25, 2006
Falling in love all over again
You know, when you're hiring, and the perfect resume comes across your desk, you know it? Well, it happened today. Literally the perfect candidate fell into our laps. I talked to her, and she sounds like exactly (really, exactly) what I'm looking for. She's got Struts experience. She's got Hibernate experience. She puts serious value on test-driven development. Hell, she's got values as a programmer! She likes, understands, and wants to work in an Agile shop. From talking to her, she's got a pretty laid-back personality. I think I'm in love.
Her interview is set up for Friday, and I'm really curious to see how she's going to click with the team.
Well, world.... I'll keep you posted on how it goes.
-Bill
Monday, September 04, 2006
Team Building as Magic
I've recently been tasked with staffing our group back up to its fullest potential. Recent comings and goings have left us with 2 staffed and 4 open positions, not including mine. As I've been wading through resumes and talking with candidates, something struck me: building a team from near-scratch like this has a great resemblance to the strategic card games Magic and Pokemon. To start a game in one of these, you sort through your collection of cards (offensive, defensive, special use, environmental, etc.) and build a game deck of 15 or so cards that you play that game with. Part of the gameplay is luck, but far more of it is strategy, hinging on how you pick what cards go in your deck and how you play those cards as the game progresses.
Building a team for success at work is no different. I have dozens of likely candidates as potential choices, each with their technical and personal strengths and weaknesses. The team that I put together will determine our success over the coming months and years. In addition to core technical competencies, I'm putting emphasis on professional developer talents such as testing and agile team experience, and I'm banking that that emphasis will serve us well. Other traits I'm finding valuable are a passion for craft, meaning I want people to be passionate about doing things the right way (over just getting them done), and people skills over technical skills, meaning that I don't want an uber-guru who keeps everyone around them ticked off or can't communicate. Ideally you want someone with all these skills and all the ideal traits in one package (or even a whole team of folks like that), but those folks are as rare and hard to find as heroes in Magic. I think a solid, heterogeneous team with just the right blend of skills that can work well together is far more realistic and achievable.
I'm off to build my winning deck.
Friday, July 28, 2006
OSCON 06 Day 5 - Morning Sessions
Session 1
Open Source Performance Monitoring Tools, Tips and Tricks for Java
Matt Secoske
matt@secosoft.net
Your project requires performance monitoring, planning, and goals when your business requires them. You can do this through profiling, which is a focused look at system execution. To plan for performance: 1) determine your goals, 2) create testing scenarios, 3) determine monitoring/profiling needs, 4) integrate this into your development process, and finally 5) integrate it into your production environment.
In planning, you need to know:
- expected total number of clients
- expected peak total number of clients
- most common tasks these clients will be doing
- acceptable response time
- how long will the data stay around
and what you want to monitor:
- hardware (web/app/db servers): CPU, memory, cache hit %, disk and network speed
- Java-specific (GC, app metrics)
JUnitPerf decorates JUnit tests, which makes it great for benchmarking particular tests or test cases while refactoring; it's good for continuous performance testing, but not so good as a deployed monitoring solution. It's also available in JUnit 4.0 as an annotation.
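To make the decorator idea concrete, here's a minimal sketch in the JUnitPerf style (hedged: the SearchTest suite and the one-second budget are invented for illustration; TimedTest is the decorator from the JUnitPerf library):

import com.clarkware.junitperf.TimedTest;
import junit.framework.Test;
import junit.framework.TestSuite;

// Wraps an existing JUnit 3.x test suite so the build fails if it runs too slowly.
public class SearchPerformanceTest {
    public static Test suite() {
        Test plainTests = new TestSuite(SearchTest.class); // hypothetical existing tests
        return new TimedTest(plainTests, 1000);            // fail if elapsed time > 1000 ms
    }
}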
The Grinder is a clusterable performance tester. It can do stress, load, capacity and functional testing. It can proxy traffic for recording and playback later as part of tests.
Apache JMeter does stress, load, capacity and functional testing. It's not clusterable and doesn't do proxy recording. It does have a plugin architecture for customization.
Log file analysis is another way to monitor performance. Since you're writing to disk, it can have a noticeable negative impact on the performance you're trying to monitor. It requires changes to your source code, and doesn't accurately reflect how expensive your operation was, only the time required to execute it. If you're committed to logging, aspects are a recommended way of doing it (AspectJ, AspectWerkz, Java Interactive Profiler, GlassBox Inspector).
JFluid / NetBeans Profiler - part of Sun's new JVM profiling tool, and also part of the NetBeans Profiler Extension. It supports local and remote profiling, and provides limited JVM support (mainly 5.0 and up).
On the other side, there's Eclipse TPTP. It does local and remote profiling, and requires a JVM agent for remote use.
Matt then did a brief demo of TPTP in Eclipse, and a deeper demo of the NetBeans profiler, exploring many of the ins and outs.
He closed with a few tips and tricks:
- Put in just enough metrics to get your performance measurements
- Performance Test != Production
- Real world data + real world usage patterns + near-production environment = accurate benchmarks
- Keep a little monitoring in production.
Session 2
Hacking Your Home Phone System (Year 2) - aka. Does the Phone Work Today?
Brian Aker
Terms:
PBX - private branch exchange
FXO - receives signal
FXS - generates signal
DMARK - demarcation point
PSTN - public switched telephone network
On wiring: put boxes in every wall, with 3/4 conduit (the largest you can fit
in a wall). Metal boxes survive better and are easier to mount. Run
electricity separately and cross at 90deg angles. More than 360deg worth of
conduit turns makes it hard to run cable later. Finally, don't leave boxes
empty for inspections - it worries inspectors.
Asterisk is an open source project that creates all sorts of phone
functionality in software (voicemail, conferencing, PBX, etc.) Digium makes
the best hardware (and funds Asterisk development). Skip the cheaper cards, and
get the TDM400P cards. Be careful of the FXO / FXS ports, as plugging the
wrong one into the wrong place can blow up the card.
Phone Instruments
BudgetTone: $40, cheap, cheap sounding, voice CID
Snom 190: $299, entirely scriptable via web services, can talk to multiple voip
services, no buttons for lines
Polycom SoundPoint: $200, good sound, buttons for lines, PoE, slow to boot
Analog to Digital Devices
Sipura 2000: 2port FXS analog adapter, $70
Sipura 3000: FXS/FXO bi-directional adapters, $70
The Computers: Any linux box will work.
The Software: Asterisk is hard to setup. The extensions file is the key to it
all. Plan on it taking a while.
The next generation is Asterisk@Home. The CD boots, wipes the drives, and
installs a working Asterisk based on CentOS. It probably needs to be tweaked
after it installs, but it works. Recently they changed the name of the product
to Trixbox (which comes with SugarCRM built in).
High-availability: MySQL Cluster - set up a second machine, cluster them.
Mashups:
- Front-door solenoid: unlock your doors via your phone
- ipkall: free phone numbers!! (dial-a-song, a special number for certain
callers) - dial-a-monkey network
- Livejournal's mod_mp3: good for creative content, freaks out business people
- AIM Bot: with the follow-to-phone feature, it sends IM messages when messages arrive (configurable for certain folks, if that suits you).
Hardware Vendors
Digium - http://www.digium.com/
Polycom - http://www.polycom.com/
Snom - http://www.snom.com/
Sipura - http://www.sipura.com/
Links
http://voip-info.org/
http://www.asterisk.org/
http://www.planetasterisk.org/
FreeWorld Dialup - http://www.freeworlddialup.com/
http://krow.livejournal.com/ <- Speaker's blog.
-Bill
Tags: OSCON06, Asterisk
OSCON 06 Day 4 - Afternoon Sessions
Session 3
Outer Joins for Fun and Profit
Bill Karwin
I've never fully understood Outer Joins. Although I can use them, I'm far more comfortable falling back to the (+) from Oracle and *= from MSSQL to get what I want. Bill started by explaining outer joins with set theory and Venn diagrams, and covering which open source databases support which kinds of outer joins. Then Bill launched into several examples, each explaining concepts of increasing complexity. Beyond the obvious stuff, you can do things like mimic a NOT IN subquery on platforms (MySQL 4.0) that don't support subqueries. You can also do a 'greatest row per group' query without having to use the max() function. This lets you return the whole row, not just the max'd value. In English, it's "show the row for which no other row exists with a greater date for the same product".
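To pin down that "greatest row per group" trick, here's a small JDBC sketch (the prices table, its columns, and the connection details are invented; the self-join pattern is the one from the talk):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LatestPricePerProduct {
    public static void main(String[] args) throws Exception {
        // For each product, return the price row for which no other row has a later date.
        String sql =
            "SELECT p.product_id, p.price_date, p.price " +
            "FROM prices p " +
            "LEFT OUTER JOIN prices later " +
            "  ON later.product_id = p.product_id " +
            " AND later.price_date > p.price_date " +
            "WHERE later.product_id IS NULL"; // no later row matched, so p is the newest

        Class.forName("com.mysql.jdbc.Driver"); // era-appropriate MySQL driver
        Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/shop", "user", "pass");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(sql);
        while (rs.next()) {
            System.out.println(rs.getInt("product_id") + "  "
                    + rs.getDate("price_date") + "  " + rs.getBigDecimal("price"));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}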
One of the more interesting demo/solution bits he showed was sudoku solving with SQL (using outer joins, of course...).
Session 4
Using the Google Web Toolkit
Bruce Johnson and Bret Taylor
Ajax, the same old arguments: nothing to download, every app is only a URL
away, desktop-like functionality. In reality, it auto-reinstalls every
full-page load, so it'd better be small. It's sorta secure; it's so dang hard
to get it working at all that security is almost an afterthought. There's a
plethora of technologies, each with its own platforms and its own quirks - it's no
wonder developers hate Ajax.
Looking at Ajax from a Java viewpoint, Google set out to leverage their Java
knowledge, and make Ajax front-ends that are still very webby. They came up
with the idea of translating Java into Javascript. They actually pulled this
off, and it really works. GWT can run in two modes: hosted mode (the whole app
runs in the JVM as Java - this gives you Java-based debugging, which is
beautiful and useful!) and native mode (which runs in an OS-native web browser).
GWT moves all the stateful session logic to the client. This enables stateless
load balancing and (thus) server clustering. It also leaves as much UI-only
stuff as possible on the client, requiring no server round-trip. GWT provides leverage for
your solutions - you can take any solution, wrap it in a class, and have it
around to reuse later.
GWT, as Java, brings static type checking (static types... lalalalala....) and
all the benefits and drawbacks that typically come with it. There are code reuse advantages: create your ajax libraries as jars, reuse your code.
They provide a fantastic palette of UI widgets that make cross-platform rich
UI creation pretty simple, and one less thing to worry about.
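Here's a minimal sketch of what that looks like with the 1.x-era widget classes (the module configuration and host HTML page are omitted, and the button text is invented):

package example.client;

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.ClickListener;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.Widget;

// Entry point: written as plain Java, compiled by GWT into JavaScript.
public class HelloGwt implements EntryPoint {
    public void onModuleLoad() {
        Button button = new Button("Say hello");
        button.addClickListener(new ClickListener() {
            public void onClick(Widget sender) {
                Window.alert("Hello from Java, running as JavaScript!");
            }
        });
        RootPanel.get().add(button); // attach the widget to the host page
    }
}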
The Web Toolkit comes along with an RPC library as well. It's dead simple, and
lets you create objects remotely, and serialize them back to the browser.
History and linking are usually a casualty of Ajax. GWT lets you (with a pretty simple set
of calls) provide full history support and linking. It really is dead
simple!
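As a hedged sketch of those history calls (again the 1.x-era API; the token names are made up):

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.History;
import com.google.gwt.user.client.HistoryListener;
import com.google.gwt.user.client.Window;

// Push a token whenever the app changes state, and react when the user hits back/forward.
public class HistoryAwareApp implements EntryPoint, HistoryListener {
    public void onModuleLoad() {
        History.addHistoryListener(this);
        History.newItem("inbox"); // adds a real browser history entry (and a bookmarkable #fragment)
    }

    public void onHistoryChanged(String historyToken) {
        // back, forward, or a bookmarked URL lands here; re-render for the named state
        Window.setTitle("Viewing: " + historyToken);
    }
}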
But wait! There's more!! There's JUnit support for testing of both the Java
and Javascript sides!
Downloadable at http://code.google.com/webtoolkit/
and the GWT Widget Library at http://gwt-widget.sourceforge.net/ and more.
Wow. GWT is cool. I can't wait to get it and play with it.
Session 5
Taming an Audience with Laser and Snake
Robert Stephenson
The basic idea is that you have a webcam, an iSight in Robert's case, and a
laser pointer. You point at the screen with the laser pointer, and the iSight
and your software work like a mouse to control the computer running the
presentation. In Robert's case, he leveraged the AppKit (a python interface to
Apple system calls) to call the CocoaSequenceGrabber to capture frames. Robert
chose a set of RGB factors to pick an appropriate range of laser colors to
have the app look for.
-1.1r + 2.0g -1.1b > 0.9
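In code, that per-pixel test amounts to something like this (my sketch, assuming the channel values are normalized to 0..1; the weights are the ones above):

public class LaserDotFilter {
    // True when a pixel is bright and green-dominant enough to be taken as the laser dot.
    static boolean looksLikeLaserDot(int r, int g, int b) {
        double rn = r / 255.0, gn = g / 255.0, bn = b / 255.0;
        return -1.1 * rn + 2.0 * gn - 1.1 * bn > 0.9;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeLaserDot(40, 255, 40));   // bright green dot -> true
        System.out.println(looksLikeLaserDot(200, 200, 200)); // plain white pixel -> false
    }
}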
But this didn't work so well. The problem is that there's not enough
resolution in the sRGB color space for the camera to be able to resolve the
dot of the laser pointer. And/or there's not enough range in the sRGB gamma,
which ends up clipping the intensity of the laser dot. Owing to the way that
the camera's automatic gain control tries to normalize the camera exposure,
Robert tweaked the camera's sensitivity settings to compensate and was
able to reliably track the laser about the screen.
Then, with some Objective-C, C, Python and AppleScript, he was able to
control PowerPoint (or was it Keynote?). Robert also added capabilities to
adjust keystoning and skewing of the image for the camera.
Session 7
Web Heresies
Avi Bryant
Seaside is a web development framework written in Smalltalk. Chances are
you're not going to go out and use Seaside. That's okay with Avi. He just
wants to talk about the heresies of web development. Avi's first idea is that
the HTML files belong to the developer, and that designers work with the CSS
files. He did a brief apples to apples comparison of language semantics so we
non-Smalltalk people could read his examples. Naming things, while a good idea
when absolutely necessary, should be avoided when it can be. You can use the object
that the field is rendering for as the source of its own information. If you don't
name your objects, they get numbers, and a behind-the-scenes hash maps the
numbers back to their objects and their accessors for each numbered field.
Kinda confusing, but I can see the reasons why this would be a good idea.
An aside from Avi's talk: Seaside is based on Squeak Smalltalk, which I've been meaning to check out.
From my Smalltalk days, grumble11grumble years ago, I remember the Virtual
machine being HUGE and somewhat slow, and hard to crack for outside use. I'm
really curious to give Seaside a shot, even if just to see how it does what it
does. I'm intrigued as heck about how a Smalltalk web app server would fit
together and work.
Avi went on to cover a few demos, showing source code, and running a few hello
world-type things. A question that came up (as part of his talk) is how would
you serialize a session? The short answer is that you don't. Just use session
affinity instead. The cost of a server failing and taking a session with it
tends to be very low, so (essentially) screw it. Use sessions like crazy. Put
anything in a session you want. Really. Anytime. (I'm guessing this is another
heresy.) Try on the idea that you can also save the current execution point
and put it in the session. This would mean that you can throw the current
execution stack in the session (and in this case putting it in the URL) as the
form is displayed. When the form gets posted, you pull the execution stack out
of the session and continue execution from where it was, thus processing the
form. Think about that one and let that one sink in. Does your brain hurt yet?
It should. (This overhead isn't that much, but amounts to about 1M / current
user.)
Another Seaside goodie is that you can change your page load order by changing
the method order in your Smalltalk code. Relying on Smalltalk gives you some
really interesting powers that I've never seen in web applications before.
Your server image is still a fairly full-featured image, including the ability
to VNC into the server image and interactively debug it. That's another "Think
about it." moment. That's cool.
Q: What's the lineage of Seaside?
A: WebObjects + Paul Graham = Seaside. It's also similar to Tapestry.
Q: What have been the changes to Seaside over time?
A: Seaside used to have a templating system. That was ditched 3 years
or so ago in favor of programmatic HTML creation. There were also a few
minor architectural changes over time as well.
Q: Ajax?
A: Yes! http://scriptaculous.seaside.st/
-Bill
Tags: OSCON06, SQL, Outer Joins, Smalltalk, Seaside, GWT, Google Web Toolkit
Thursday, July 27, 2006
OSCON 06 Day 4 - Morning Sessions
Session 1
Building Rails to Legacy Applications
Robert Treat
Of course greenfield applications are the easiest to build in Rails, but what about an existing schema? How do you get Rails to coexist and work successfully with a legacy database?
One option is to use your database to help fix the problem. Views can make a legacy schema look far more Rails-friendly. He demo'd that in PostgreSQL you can create a rule that lets view write-backs work. Once you set these insert/update rules, you can treat your views as Rails-friendly tables and things should be far easier.
Using the database requires less knowledge of Ruby and Rails, and it does make things look more like what you expect to see in book examples. It does have the major drawback of having to support two schemas.
The other option is to make Rails smarter about how it maps to your schema. That is, you can use Ruby code to manipulate ActiveRecord to match your data model. First off, edit your environment.rb to set
pluralize_table_names = false. Another adaptation to make is to tell your model to use a differently-named primary key. That way Rails knows to look for foo_id rather than just id.
Slides at http://www.brighterlamp.org/
Session 2
I'm 200, You're 200
David Sklar
sklar@sklar.com
In Web 2.0, we make lots of assumptions about other services being up, and about who we can depend on and who we can't. So, what is your app dependent on? What contingencies do you have if it's not there?
Dependency is needing something that's not yours (e.g. physical control, organizational control, intellectual control). This includes server dependencies (content, per-machine hardware and software, internal- and external-network calls, content created by others), code dependencies (who wrote it, how it works, who wrote the documentation, who knows where the documentation is wrong), and business dependencies (who supplies your feed, how many of your folks are in the National Guard, what are your copyright and patent risks, what's in your SLAs and are the penalties helpful?).
You can mitigate real-time dependencies with planned modes of degradation based on data freshness, application features and read/write data. Avoid live web service calls when possible, instead making calls offline, which lets you sanity check and cache the results. You can also create a local data store for when the remote service has blips. To degrade features gracefully, segment your app into non-interdependent parts. Further, you can build code with the idea of having pluggable external dependencies - e.g. use map provider Y instead of G, or ad protocol G instead of X. APP, S3 and "ad html" are already starting to fill that need. You also need to monitor your dependencies so you know when to switch.
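To make the "pluggable external dependency" idea concrete, here's a hedged Java sketch (the interface and provider names are invented; the point is the planned degradation path, not any particular vendor API):

// Hide the external service behind an interface so the app can swap providers
// or fall back to a local cache when the live call fails.
interface MapProvider {
    String tileUrlFor(double lat, double lon);
}

class DegradingMapProvider implements MapProvider {
    private final MapProvider primary;  // live web-service call
    private final MapProvider fallback; // e.g. a locally cached or static provider

    DegradingMapProvider(MapProvider primary, MapProvider fallback) {
        this.primary = primary;
        this.fallback = fallback;
    }

    public String tileUrlFor(double lat, double lon) {
        try {
            return primary.tileUrlFor(lat, lon);
        } catch (RuntimeException serviceDown) {
            return fallback.tileUrlFor(lat, lon); // planned mode of degradation
        }
    }
}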
In summary, examine your dependencies and your risk exposure, and plan for and mitigate those risks accordingly.
Slides at http://www.sklar.com/
-Bill
Tags: OSCON06, Ruby on Rails
Wednesday, July 26, 2006
OSCON 06 Day 3 - Afternoon Sessions
Session 1
Driving Rails Deep into the Back Office
Fernandez
Financial pressures to reduce costs are big.
Be ready for the build vs. buy decision because it will come up.
Obstacles: "good enough" legacy systems, BI tools, reporting packages.
Better and faster in rails? Yes!
One idea: the trojan application.
The Race approach: parallel (or the same) teams do the same project in Rails and Java/.NET/whatever.
The Pilot: do a small bit of an app as a proof of concept.
The Rescue: catch a failing project and redo it in rails
The Undercut: (risky) Come in, and because you're so productive you can do it in far less time than other solutions would take.
Case Study:
the "PCS" Project
-Small team
-Three months
-Replaced homegrown PL/SQL solution
-DSL-centric solution
Lesson 1: optimize and raise your levels of abstraction
How? Know what each piece of the stack does, and build a custom DSL.
Lesson 2: Rails really breathes life into XP
All the benefits of XP really click with Rails.
Lesson 3: Don't sweat performance and scaling. Most back-office systems have relatively few users.
Session 2
Streamlined
Stuart Halloway
Relevance LLC
In the beginning, there was ruby and rails...
Along came Streamlined, which replaces the basic scaffolding.
Streamlined generates pages with a basic stylesheet, Ajaxified create/edit/update/delete, on-the-fly filtering of all fields, sorting by clicking column headers, and export to CSV/XML.
There's a new 'app/streamlined' directory that contains view code (that looks amazingly like Rails models). It picks up object relationships from the Rails models and reflects them in the Ajaxy goodness. The new streamlined directory contains all the view-related stuff, putting it in one place. All the views (.rhtmls) are generated and fully open to being customized. If there's no customization, it falls back to the generated defaults.
All the CRUD stuff gets refactored from each and every controller up to the streamlined_controller. All are overridable if need be. Or, if deleted entirely, there's an automatic fallback to a functional generic version.
Streamlined is Alpha
0.02 is out now
0.03 is out next Monday
Streamlined produces:
-Production-ready Enterprise scaffolding.
-Generic enterprise CRUD
-Simplicity of ActiveRecord for views and controllers
Generator options fall into 3 categories:
-semantic (--no-relationships, --no-views)
-look (--no-header, --no-about)
-theme (css=vendor.css or whatever)
UI Options
-relationship - choose view and summary
-user_columns - which columns to display
Licensed under MIT license.
Download http://www.streamlinedframework.org/
Documentation: same place
Submit patches: http://collaboa.streamlinedframework.org/
Things to look at:
- jmatter - naked objects project
Session 3
Ruby for Java Programmers
Ugo Cei
Why?
-Leveraging existing Java libraries, source and infrastructure
How?
-RubyJavaBridge http://arton.no-ip.info/collabo/backyard/?RubyJavaBridge
Works best if you wrap Java methods in simpler wrappers.
- No mapping for Ruby iterators on Java collections.
- No date convertors
- Weak ejb getter/setter property converters.
-SWIG - http://www.swig.org/
Jakarta POI, a nifty library for manipulating Microsoft OLE2 Office files, uses SWIG to provide Ruby bindings. (http://jakarta.apache.org/poi/poi-ruby.html)
-JRuby - http://jruby.codehaus.org
Not a bridge, but a Ruby interpreter written in 100% Java
Gives ruby access to all Java libraries.
No access to ruby extensions in C
"Almost" able to run RubyGems and Rails
Quick Development pace.
Still slow compared to C Ruby, but quoting Charles O. Nutter "I think it's now very reasonable to say we should beat C Ruby performance by the end of the year."
- Can use ruby "each" on Java collections
- Data type conversions
- Full support for JavaBean properties
Using the IRuby interpreter, you can call Ruby methods from Java.
-XML-RPC - It's possible to run the java and ruby process independently and enable them to communicate.
-SOAP - It's possible to run the java and ruby process independently and enable them to communicate.
Slides at http://www.sourcesense.com/transfer/ruby_for_java_programmers.pdf
-Bill
Tags: OSCON06, ruby, streamlined, jruby, Ruby on Rails
OSCON 06 Day 3 - Morning Keynotes and Sessions
Keynotes
Of the morning keynote speakers, Tim O'Reilly had the most interesting and thought-provoking stuff to share. His talk keyed around 5 points that he thinks are shaping the Open Source landscape today:
- Architectures of Participation (aka. Web2.0)
- Open Source Licenses are Obsolete
- Asymmetric Competition
- Operations as Advantage
- Open Data
"When the best leader leads, the people say 'We did it ourselves!'"
- Lao Tzu
This is about leveraging your community to make your product richer, like Amazon reviews, or Craigslist content.
Maybe a little overstated, but what Tim is saying is that Open Source licenses become irrelevant if the code is changed and run on a web server somewhere and never redistributed. The community needs to create a similar definition for Open Services.
While big companies have lots of money to throw at problems, it's the small companies that create out-of-the-box solutions that blind-side the big guys. Craigslist, with 20 employees, is #8 of the top 10 Web companies in terms of hits, and is < 1% the size of any of its listmates.
"Being on someone's platform is becoming the same as being hosted on their infrastructure
- Missed attribution to a MS employee
This, I think, is the biggest of them. On an obvious level, it's about mashups. If people can get to your data (via APIs) they can do amazing things that you never dreamed of. On another level, it's about your own data, and where it lives, and who owns it, and how easy it is to get out. If it's on your hard drive, it's yours. If it's on company X's servers, who really owns it, and what happens if company X blows up, or gets bought, or whatever? What happens to your data then?
Random links from Tim's talk
- Seaside - A web app framework in Squeak Smalltalk
- Ning - A build-your-own-webapp for the masses
- OpenFount - A Web 2.0 app service that creates 2.0ishness via GWT and uses Amazon S3 for storage.
- StumbleUpon - A new way of discovering web sites.
Session 1
Metaprogramming Java with HiveMind and Javassist
Howard Lewis Ship
hlship@gmail.com
Metaprogramming - Writing programs to write programs. Traditionally done at compile time: lex, yacc, XDoclet (javadoc-style comments that affect the program build), AspectJ.
Solution 1: Source Code Generation - Generate more source code at build time (XDoclet, ejbc). It's awkward to write tests against code that may not exist yet, and it makes for a more complex build cycle.
Solution 2: Aspect-Oriented Programming - Metaprogramming with AspectJ: weave your code in with other code in an "aspect". Control how the initial code and the aspect code work together. It changes your classes at build time by adding method calls interspersed with your code.
Runtime Metaprogramming: Leave existing classes alone, and create new classes at runtime. Factories use configuration to create new classes and instantiate them. Driven by annotations.
HiveMind: Inversion-of-control container, much like Spring. Provides lifecycle to services: injection of dependencies, notification of lifecycle events. Driven by XML configuration. AOP via interceptors that wrap implementations.
Javassist: AOP library: load classes into memory as CtClass objects, modify them (add, remove, change), and convert them into Class objects. You don't have to learn bytecode ... it uses a Java-like syntax. (Part of JBoss)
HiveMind wrappers: Don't change existing classes; use lots of proxies. ClassFactory is a simplified API wrapper around Javassist: create a new ClassFab instance, and add constructors, interfaces, and fields to that instance.
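A hedged Javassist sketch of the runtime approach (plain Javassist rather than HiveMind's ClassFab wrapper; the example.Greeter class and its greet() method are invented for illustration):

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.CtNewMethod;

public class AddMethodAtRuntime {
    public static void main(String[] args) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass ct = pool.get("example.Greeter");        // hypothetical existing class
        CtMethod shout = CtNewMethod.make(
            "public String shout() { return greet().toUpperCase(); }", ct);
        ct.addMethod(shout);                             // weave the new method in
        Class modified = ct.toClass();                   // define the modified class
        Object greeter = modified.newInstance();
        System.out.println(modified.getMethod("shout").invoke(greeter));
    }
}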
Metaprogramming and Design Patterns - meta makes it easy to build code that implements useful design patterns (and does so more extensibly) on the fly.
-Bill
Session 2
Embedding a Database in the Browser
David Van Couvering
Database Technology Group - Sun Microsystems
david.vancouvering@sun.com
A database? In a browser? Why?
This is useful for the mobile user, for keeping personal data off the server, and for providing a fast local web cache.
What do you need to make this work? Java, embeddable, small footprint, standards compliant, secure, and automatic crash recovery.
Why a database? Standard, portable API (JDBC), ACID semantics, flexible data model, powerful query capability, and it works with lots of tools, using existing skills.
What is Apache Derby? 100% Java Open Source relational database, http://db.apache.org/derby. 2M jar, with options to get it down to 500k.
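A minimal embedded-Derby sketch (standalone rather than applet-signed, and the database and table names are made up), just to show how little it takes to get a local database going:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class EmbeddedDerbyDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");       // boot the embedded engine
        Connection conn = DriverManager.getConnection("jdbc:derby:notesdb;create=true");
        Statement stmt = conn.createStatement();
        stmt.executeUpdate("CREATE TABLE notes (id INT PRIMARY KEY, body VARCHAR(200))");
        stmt.executeUpdate("INSERT INTO notes VALUES (1, 'stored locally, no server needed')");
        ResultSet rs = stmt.executeQuery("SELECT body FROM notes WHERE id = 1");
        while (rs.next()) {
            System.out.println(rs.getString("body"));
        }
        rs.close();
        stmt.close();
        conn.close();
        try {
            DriverManager.getConnection("jdbc:derby:;shutdown=true"); // clean shutdown...
        } catch (SQLException expected) {
            // ...which Derby signals with an exception (SQL state XJ015)
        }
    }
}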
Is this AJAX? Well, not really. All data is stored in the local derby implementation. Maybe LJAX?
To actually use this, your code has to be cert signed (to get access to the local filesystem for the local database). You can self-sign or get a real cert from the usual places. Your code then needs to be wrapped in a PrivilegedAction block.
Mapping Data to Fields: Java API with XML abstraction, call JDBC from Javascript, Java Persistence API, dojo storage abstraction.
Essentially: SQL results -> XML -> js into DOM fields.
You can even run this off of a USB stick. The Derby format is portable and can be read by any derby.jar. You can encrypt databases for security.
Alternate solutions: mozStorage (browser specific, intended for internal Mozilla use), dojo.storage (with Flash)
Future Directions: Web server in browser, Synchronization, Implement a dojo StorageProvider.
-Bill
Tags: OSCON06
Tuesday, July 25, 2006
OSCON 06 Day 2 - Afternoon Tutorials
John Paul Ashenfelter - Rock-solid Web Development: Testing Web Apps
Take-home lesson #1 : Testing gives you confidence in your code and application.
Basics of Testing
Hierarchy of testing:
1. None.
2. Ad hoc testing - depends on people, not reproducible.
3. Unit testing
4. Bodies - help desk, users, managers.. have people beat on it.
5. Bodies + test plan - directs people on what to test.
6. Automated test plans - gives the computer the boring, tedious parts.
Types of Testing: Low Level Code
Low-level testing done by developers to make sure an object behaves the way the spec says it should.
- Done by programmers
- Unit tests are the best examples
- Specific functionality tested in isolation.
Types of Testing: Application Level Testing (also Functional or Integrated Testing)
Done by non-dev people... QA and UA types.
Also may involve automated testing.
Examples are browser interactions.
Types of Testing: System Level
Includes Load, Performance and Stress tests.
Types of Testing: User Level
Testing the stories from the cards - usability and acceptance testing.
See also: conformance, security, and failover testing.
Who does testing: You. Dev team, QA team, Help Desk. Not users / customers.
Take-home lesson #2: Do not let your users do your testing.
Getting started? If you're starting from ground zero (lotsa code, no tests) you can add tests as you write new code, add tests that demonstrate reported bugs, and add tests instead of clicking through the app yet again.
There's also (from http://use.perl.org/~amoore/journal/30215)
1) boiling the frog - start slowly
2) play ping-pong - write a test for some code that someone's else wrote
3) maybe a ratchet? - keep improving, and make sure you keep bettering your test standards.
Take-home lesson #3: Good programmers write tests.
Functional Testing with Selenium
Easy to use, runs in many browsers, exposes browser specific issues, straightforward to automate.
Speaking Selenese: you build tests in HTML as 3-column tables. The command language has Actions, Accessors, and Assertions. And there are locators and patterns.
Actions: anything the user can do, there's a selenium action for.
Accessors: examine the state of the browser/application, usually storeSomething; each has several (often six) related assertions.
Assertions:
assertSomething - aborts on a failure
verifySomething logs failures and continues
waitForSomething waits until a timeout or condition (ajax)
There are also the inverse of all these.
Selenese locators: it can find things by id, name, identifier, link, dom, and xpath.
Basic test structure is an HTML 3-column table: command | argument | argument, where the second argument is often blank. The first line is often a comment.
There's also an IDE which can record sessions, show tests, and allow editing of existing / recorded tests.
Where this gets cool is that you can have something (Java, Rails, perl, etc.) generate the 3-column HTML that is your test. Doing this, you can build tests that contain decisions, loops, and database references.
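A toy sketch of that generation idea (the commands and locators are standard Selenium Core ones; the page ids and URLs are invented):

// Emit the 3-column "selenese" HTML table from plain Java, so loops and data can drive the test.
public class GenerateSeleneseTable {
    public static void main(String[] args) {
        StringBuilder html = new StringBuilder("<table>\n");
        row(html, "open", "/login", "");                      // Action
        row(html, "type", "id=username", "bill");             // Action with locator + value
        row(html, "clickAndWait", "id=submit", "");
        for (int page = 1; page <= 3; page++) {               // a loop a hand-written table can't express
            row(html, "clickAndWait", "link=Page " + page, "");
            row(html, "verifyTextPresent", "Results page " + page, ""); // logs failure, keeps going
        }
        html.append("</table>\n");
        System.out.println(html);
    }

    private static void row(StringBuilder html, String cmd, String target, String value) {
        html.append("  <tr><td>").append(cmd).append("</td><td>")
            .append(target).append("</td><td>").append(value).append("</td></tr>\n");
    }
}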
Take-home lesson #4: Selenium will save you time.
Continuous Integration and Automation
Integration with Cruise Control and your testing is a good and worthwhile thing
to do.
Use dbUnit to set up your database before tests and reset it when you're
finished. It's also useful for testing stored procedures and the like. You
know, the smarts you put into db code.
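For flavor, here's a hedged DbUnit sketch (2.x-era API as I understand it; the JDBC URL, dataset file, and test are invented): load a known flat-XML dataset before each test so every run starts from the same state.

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.operation.DatabaseOperation;

import junit.framework.TestCase;

public class OrderQueryTest extends TestCase {
    protected void setUp() throws Exception {
        Connection jdbc = DriverManager.getConnection("jdbc:mysql://localhost/test", "user", "pass");
        IDatabaseConnection db = new DatabaseConnection(jdbc);
        IDataSet seed = new FlatXmlDataSet(new FileInputStream("seed-data.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(db, seed); // wipe the tables, then insert the seed rows
        db.close();
    }

    public void testFindsOpenOrders() {
        // exercise the code under test against the known seed data
    }
}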
Take-home lesson #5: If you repeat it, automate it.
Load testing with Grinder
Load tests - how does the app behave with more than one user using it.
Performance tests - load it and measure performance metrics at typical usage
levels.
Stress tests - crank up the load until it breaks.
The Grinder (Use V3 and not V2. The Beta label is bogus and is really only
related to documentation):
Agent: that run the tests
Console: coordinate the tests in a central location
TCPProxy: records the test using a browser.
Personal note: Grinder does indeed rock - we've used it for some load-related
troubleshooting. But I realize we could be using it for lots more.
Talk slides are at http://www.ashenfelter.com/ or http://transitionpoint.com/
Apologies for this one being so erratic and unedited. This is pretty raw dump of my notes from the session.
-Bill
Tags: OSCON06, testing
Take-home lesson #1: Testing gives you confidence in your code and application.
Basics of Testing
Hierarchy of testing:
1. None.
2. Ad hoc testing - depends on people, not reproducible.
3. Unit testing
4. Bodies - help desk, users, managers... have people beat on it.
5. Bodies + test plan - directs people on what to test.
6. Automated test plans - gives the computer the boring, tedious parts.
Types of Testing: Low Level Code
Low-level testing done by developers to make sure an object behaves the way the spec says it should.
- Done by programmers
- Unit tests are the best examples
- Specific functionality testing in isolation.
Types of Testing: Application Level Testing (also Functional or Integrated Testing)
Done by non-dev people... QA and UA types.
Also may involve automated testing.
Examples are browser interactions.
Types of Testing: System Level
Includes Load, Performance and Stress tests.
Types of Testing: User Level
Testing the stories from the cards - usability and acceptance testing.
See also: conformance, security, and failover testing.
Who does testing: You. Dev team, QA team, Help Desk. Not users / customers.
Take-home lesson #2: Do not let your users do your testing.
Getting started? If you're starting from ground zero (lotsa code, no tests) you can add tests as you write new code, add tests that demonstrate reported bugs, and add tests instead of clicking through the app yet again (a small sketch of the bug-first flavor follows after the list below).
There's also (from http://use.perl.org/~amoore/journal/30215)
1) boiling the frog - start slowly
2) play ping-pong - write a test for some code that someone else wrote
3) maybe a ratchet? - keep improving, and make sure you keep raising your testing standards.
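To make the add-a-test-for-each-bug idea concrete, here's a minimal Test::Unit sketch of my own (not from the talk); the PriceCalculator class and the ticket number are hypothetical stand-ins:
require 'test/unit'

# Hypothetical regression test: ticket #123 reports that discounts over
# 100% produce negative prices. Write the failing test first, fix the code,
# and the test stays behind to keep the bug from ever coming back.
class PriceCalculatorTest < Test::Unit::TestCase
  def test_discount_never_drops_price_below_zero
    calc = PriceCalculator.new(1000)          # price in cents
    assert_equal 0, calc.apply_discount(150)  # a 150% discount should floor at zero
  end
end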
Take-home lesson #3: Good programmers write tests.
Functional Testing with Selenium
Easy to use, runs in many browsers, exposes browser-specific issues, straightforward to automate.
Speaking Selenese: you build tests in HTML as 3-column tables. The command language has Actions, Accessors, and Assertions. And there are locators and patterns.
Actions: for anything the user can do, there's a Selenium action.
Accessors: examine the state of the browser/application, usually storeSomething; each has several (often six) related assertions.
Assertions:
assertSomething - aborts on a failure
verifySomething - logs failures and continues
waitForSomething - waits until a timeout or a condition is met (Ajax)
There are also the inverses of all of these.
Selenese locators: it can find things by id, name, identifier, link, DOM, and XPath.
The basic test structure is an HTML 3-column table: command | argument | argument, where the second argument is often blank. The first row is often a comment.
There's also an IDE which can record sessions, show tests, and allow editing of existing/recorded tests.
Where this gets cool is that you can have something (Java, Rails, Perl, etc.) generate the 3-column HTML that is your test. Doing this, you can build tests that contain decisions, loops, and database references.
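For instance, here's a tiny Ruby sketch of my own (not from the tutorial) that generates the 3-column Selenese rows for a handful of search cases; the /search URL, the q field, the go_button locator, and the expected strings are made-up placeholders, while open/type/clickAndWait/assertTextPresent are standard Selenium commands:
# Each search term becomes four Selenese rows in one generated test table.
cases = { 'prototype' => 'JavaScript', 'mongrel' => 'Ruby' }

puts '<table>'
puts '<tr><td colspan="3">Generated search tests</td></tr>'  # first row is a comment
cases.each do |term, expected|
  puts '<tr><td>open</td><td>/search</td><td></td></tr>'
  puts "<tr><td>type</td><td>q</td><td>#{term}</td></tr>"
  puts '<tr><td>clickAndWait</td><td>go_button</td><td></td></tr>'
  puts "<tr><td>assertTextPresent</td><td>#{expected}</td><td></td></tr>"
end
puts '</table>'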
Take-home lesson #4: Selenium will save you time.
Continuous Integration and Automation
Integrating CruiseControl with your testing is a good and worthwhile thing to do.
Use dbUnit to set up your database before tests and reset it when you're
finished. It's also useful for testing stored procedures and the like. You
know, the smarts you put into db code.
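dbUnit itself is Java, but the pattern ports to any xUnit. Here's a rough Ruby/Test::Unit sketch of the same idea, assuming a hypothetical DB connection helper and a couple of SQL fixture files of your own:
require 'test/unit'

class MonthlyRollupTest < Test::Unit::TestCase
  # dbUnit-style discipline: put the database into a known state before each
  # test and clean it back up afterwards. DB and the .sql files are stand-ins
  # for your own connection object and fixture scripts.
  def setup
    DB.execute(File.read('test/fixtures/load_known_data.sql'))
  end

  def teardown
    DB.execute(File.read('test/fixtures/reset.sql'))
  end

  def test_monthly_rollup_stored_procedure
    assert_equal 42, DB.select_value('SELECT monthly_rollup(7, 2006)').to_i
  end
end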
Take-home lesson #5: If you repeat it, automate it.
Load testing with Grinder
Load tests - how does the app behave with more than one user using it.
Performance tests - load it and measure performance metrics at typical usage
levels.
Stress tests - crank up the load until it breaks.
The Grinder (Use V3 and not V2. The Beta label is bogus and is really only
related to documentation):
Agents: run the tests.
Console: coordinates the tests from a central location.
TCPProxy: records the tests using a browser.
Personal note: Grinder does indeed rock - we've used it for some load-related
troubleshooting. But I realize we could be using it for lots more.
Talk slides are at http://www.ashenfelter.com/ or http://transitionpoint.com/
Apologies for this one being so erratic and unedited. This is a pretty raw dump of my notes from the session.
-Bill
Tags: OSCON06, testing
OSCON 06 Day 2 - Morning Tutorials
Stuart Halloway - Ajax on Rails
Stuart's smart and articulate, not to mention that he's a hometown boy.
Many of the recent successful companies (Google, Yahoo...) didn't come from vendor-supplied solutions. They evolved on Open Source, by having a good idea, being early to the solution space, and pursuing it tenaciously. AJAX (and likely Ruby on Rails) is a great enabler for that kind of success.
97% of AJAX traffic on the web is HTML. But he lies; he pulls numbers out of his, erm... thumb.
Prototype, the Library
It provides low-level support for dynamic web apps, hiding browser oddities. It's used by Scriptaculous and Rico, and was driven and inspired by Ruby on Rails - the method names in it are RoR-ish already. It does XHR completely and provides some JS extensions. It does a bit of DOM and CSS/behavior stuff, and in Stuart's model it does 'view-centric' Ajax (sending mostly HTML to the client).
Next we demo'd a basic type-ahead search app (think Google Suggest). There is lots of space for compromise, from doing full Ajax (lots of server interaction) to pulling all results to the browser and filtering things entirely on the client in JS; there's all sorts of space for compromises in between. Doing Ajax is about taking granularity from one page at a time to much smaller mini-requests. For this demo, we talked through the Rails call to do this, then looked at the JS that was generated as a result of that call.
Random tool aside: The Green Checkmark is the Firebug Firefox extension. Get it. Use it. It even has a Javascript debugger. Good mention too for the Web Developer toolbar.
The server side of the demo is in Rails; it does the database-y bit that does the search, then returns a rendered partial to the page with the results.
There are several useful XHR helper methods: link_to_remote, form_remote_tag, remote_form_for, observe_field, observe_form, submit_to_remote. These do a lot of common, useful, helperish stuff that you probably want to already be doing anyway.
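For a taste, here's what a couple of those look like in a Rails 1.x ERB view; the element ids ('items', 'results', 'search') and the actions are made up for illustration:
<%# Replace the contents of the 'items' div via XHR instead of a full page load %>
<%= link_to_remote "Refresh list",
      :update => "items",
      :url    => { :action => "list" } %>

<%# Watch the 'search' text field and post its value to the search action %>
<%= observe_field "search",
      :frequency => 0.5,
      :update    => "results",
      :url       => { :action => "search" },
      :with      => "query" %>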
Degradable Ajax
It's pretty straightforward to have an Ajax app behave like a conventional Web 1.0 app, albeit without the fancy typeaheads and puffs and shrinks. If the Ajax widgets and the plain form submit both call URL foo, the action behind foo can be smart enough to know whether the request came from an Ajax control (in which case it returns a partial) or from a full form submit (in which case it processes and returns the entire page).
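A minimal sketch of that smart action in a Rails 1.x controller (the Item model and the partial/template names are mine, not Stuart's):
def search
  @results = Item.find(:all,
                       :conditions => ["name LIKE ?", "%#{params[:query]}%"])
  if request.xhr?
    render :partial => 'results'   # Ajax widget: hand back just the fragment
  else
    render :action => 'list'       # plain form submit: render the whole page
  end
end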
Scriptaculous
Scriptaculous is an effects and widget library that builds on Prototype. It, too, is very Ruby- and Rails-ish in its naming and parameters. The next demo and code review was an autocomplete field with Scriptaculous; the one after that was a Scriptaculous drag-and-drop demo. These libraries make things that used to be such a pain so darn easy.
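For reference (my sketch, not Stuart's code), the Rails 1.x helpers behind those two demos look roughly like this, assuming a Player model with a name column and some photo elements to drag around:
# In the controller - one macro builds the action the autocompleter calls back to:
auto_complete_for :player, :name

# In the view - the autocompleting text field, then the drag-and-drop wiring:
<%= text_field_with_auto_complete :player, :name %>
<%= draggable_element "photo_1", :revert => true %>
<%= drop_receiving_element "photo_list",
      :accept => "photo",
      :url    => { :action => "add_photo" } %>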
RJS
All the code so far (and both libraries) falls flat at changing more than one section of the page at once. RJS handles this, and it does it by sending JS back to the browser to be executed. To get started, do rake rails:update:javascripts. Add javascript_include_tag :defaults in your template page to get all the right libraries in your page. When you call something like this:
if @saved
  page.visual_effect(:blind_up, 'model_form', :duration => 0.5)
  page.replace_html 'model_error', 'Saved!'
else
  page.replace_html 'model_error', error_messages_for('player')
end
page.delay(0.5) { page.redirect_to(:action => 'list') }
those obvious-looking calls don't run directly against the page; Rails generates JavaScript from them and sends it back to the client to make these things happen.
Streamlined
Stuart, after doing DHH's 7-minute Rails demo in 3 minutes, regenerated and threw the same screens up using Streamlined, his company's gee-whiz Ajaxy Rails generator. It's great. It's beautiful. It's wicked easy. And they're officially launching it here at OSCON (tomorrow, I think). What Stuart demo'd was just a little tiny taste of what it can do (as witnessed at RailsConf), and he's going to floor everyone tomorrow at the conference session.
Random tool aside #2: JavaScript Shell is another useful Firefox extension. It lets you run arbitrary JS against your page on the fly.
More on Prototype
At a more raw JavaScript level, the Prototype library helps make JS behave very much like a real OO language, and helps smooth over the differences between different vendors' browsers. Getting down to JS at this level, while painful at times, is necessary to work around certain kinds of problems. Prototype's usefulness shouldn't be underestimated. It's not just for visual DOM-ish JavaScript; it adds considerable Object (big O) support to JavaScript.
Technologies from this talk to look more at:
- Streamlined - a fantastic Ajax scaffold generator for Rails.
- Prototype Window Class - a JS library for doing windows.
- ARTS Testing Extension - a good tool for testing RJS code.
Stuart's presentation slides and sample code are available at codecite.com
-Bill
Tags: OSCON06, Ajax, RubyOnRails, Javascript
Monday, July 24, 2006
OSCON 06 Day 1 - Afternoon Tutorials
Mastering VIM - Damian Conway
Alright, I've been in the tutorial session 5 minutes and I've given up on taking coherent notes. He's so smart, and so fast, and jumps around and covers material so quickly. I strongly prefer vi to emacs (I usually never even install emacs). Over time, though, I've drifted away from vi (I'm taking these notes in KDE's Kate, which I'm just getting used to), thinking, "Oh, it's too hard to worry with... Kate (or whatever) is so much easier." Damian has proven me wrong, and has shown me so many easy, powerful things to do with vim, that I'm excited and curious to give it another go.
My favorite bits were:
- Using do-it-yourself marks and vim's native marks to make jumping and editing easier.
- The idea of binding a key to :nohlsearch to hide the search highlighting when you're done.
- Actually *getting* cut(d)/copy(y)/paste(p) by the vi keys (I always wimp out and drop back to mouse/X-win click-drag-middleclick to do it)
- ]p to paste something at the current level of indentation
- Branched undo in vim7, producing the "Trousers of Time"
- Autocompletion!! (Well, I knew that it has always done it for filenames, but it can do it for vim commands and body text as well.) This gives us language-specific completions!
- You can use vim as a file browser!
- set autowrite - always save before quit
- :options|resize - browse and set all options, then you can :mkvimrc to make a .vimrc to save all your current settings.
- set shiftround
- Damian's Total Tabular Control function. Very nice.
- Automatic, self-cleaning backup files.
- Visual Block Mode - swoon.
- vipJ - Join all the lines in a paragraph together.
- Abbreviations, though they're not as good as insertion maps because they don't need the extra space after. This can be good and bad.
- :nmap <Space> <something useful> - Make normal-mode space do something useful.
It was a great talk (a very difficult choice over the Asterisk talk), and he's put together 50 tips - each a broad topic group of commands - in the handouts that are the real take-home goodies from this talk.
And a quick word on Damian: he's a great presenter. I got hooked on him at last year's OSCON, where he redefined Perl to work in Latin, with Roman numerals for results.
-Bill
Tags: OSCON06
OSCON 06 Day 1 - Morning Tutorials
Scalable Internet Architectures - Theo Schlossnagle
Intro and Useful Points - (I'll backfill this intro as soon as I make it to my notes...)
Practicals - The problem: scalable static image serving - e.g., an in-house Akamai. We looked at a vendor solution and the +/- of it. Next we did a build-your-own, looking at each of the pieces, and scaling the solution bigger and smaller. We talked through configuring each of the pieces, including a cool look at how to decide which image server is closest to the user geographically: by using multiple DNS servers, one near each image-serving cluster, things should converge... but they don't. Routes change too quickly for this to work. This does work if you use DNS Shared IP (anycast): all 3 servers claim to serve the same IP, so you'll always get to the closest server, with no convergence time. You can use a similar technique (if you own enough of the right, big, colocated parts) to make your system DDoS-resistant.
Logging - Again, we started with a hypothetical configuration and defined goals = multiple servers, real-time log analysis and reaction. Next he covered (dis)advantages of distributed logging, passive logging (sniffing), leading us to multicast logging. (JMS == perfect multicast logger at the app level) You can add multiple subscribers and loggers on the fly. This enables active and passive monitors, special purpose analyzers, and write-to-disk tasks.
Caching Architectures - caching can benefit application performance (and perceived performance) greatly. Theo covered layered cache, integrated cache (in app), data cache (in data store), write-thru cache. As before, he created an example case with example system configuration and goals of the exercise. In our sample bloggy-newsy app, when an article request comes in, we check to see if that page exists on the server, and if not, we fetch it from the db and create the page locally, so it exists for all future requests - much like we already do at Duke with patient discharge dates.
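A minimal Ruby sketch of that check-then-generate idea (mine, not Theo's); CACHE_DIR, Article, and render_article_page are assumed names:
require 'fileutils'

# On a cache miss, build the page once and write it to disk so every later
# request for the same article is a plain static-file read.
def article_html(article_id)
  path = File.join(CACHE_DIR, 'articles', "#{article_id}.html")
  unless File.exist?(path)
    article = Article.find(article_id)   # hit the database only on a miss
    FileUtils.mkdir_p(File.dirname(path))
    File.open(path, 'w') { |f| f.write(render_article_page(article)) }
  end
  File.read(path)
end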
Tiered architectures - Theo started with a background on tiered design. Then he evolved into a worthy rant against traditional tiering (expensive people- and $$-wise, hard to predict need and scale up/down). His alternative: a healthy replication system that allows any type of machine (web server) to answer any request. Scaling then just requires adding (or removing) like servers as needed. This sounds like the kind of stuff that Mongrel enables. Need more capability? Just stack it on like Lego bricks.
Database Replication (part of Tiering) - It's a hard thing to deal with and do well. This is not warm- (or cold-) failover - this is true db replication (clustering). Multimaster replication is a long way off. Master-slave stuff is ready to use now. He made a mention of multi-vendor database replication - using MySQL to cache and crunch 100 distributed instances, all dumping their data to a single Oracle backend.
The Right Tool for the Job - aka How to Do It Wrong. For our previous sample newsy-bloggy app, the customer wants a "which 30 users last loaded this page" feature and a "which pages did a user load in the last 30 minutes" feature. He talked through a MySQL implementation of this, how poorly it scales, and how ludicrous the implementation would be. Next we created a custom app (a skiplist structure in C) that hooks the logging stream (from above) and crunches it on the fly.
Q&A Session
Q: File systems?
A: There's GFS (at a cost)... Lustre is good. None are perfect, avoid them all if you can.
Q: Does spread have a performance cost?
A: No, negligible.
Q: I have huge sessions that resist distribution.
A: Go optimize them; make them smaller or compress them. Another option is to subdivide the session among different URLs, and only get the big/expensive ones when you need them.
Q: Massive databases, how do you scale them?
A: The most obvious way is to federate (subdivide by geography/age/etc.) across different databases.
Technologies from this talk to look more at:
- spread - a group message service
- whackamole - provides load-balancing and failover
- mod_log_spread - an Apache module that multicasts access logs over Spread
- spreadlogd - a daemon that listens to Spread and writes those logs to disk
- Splash - distributed sessions for clusters of web servers
Tags: OSCON06
Saturday, July 22, 2006
Welcome!
I've had this blog for a long while and have yet to post anything to it. I intended for this to be my "professional" blog (counter to my thought-of-the-moment personal blog), with all my geeky inspirational wisdom, but the right time never seems to come. Last month it was very close, with RailsConf '06 in Chicago, but the network support for the conference and the hotel was laughable, and my blogging urges were thwarted yet again.
Now, I'm on my way to Portland, Oregon for OSCON '06. I came last year and had a blast. I got to see DHH do his Rails intro in person, I got to hear Why do a show, I got to ride a Segway, I got to meet a fair number of cool, geeky folks, and I got exposed to all sorts of wonderful, cool, nerdly ideas that I never would have encountered if I hadn't come.
This year, I'm going without my wife. While it was great exploring Portland and the surrounding environs with her, I think I missed out on some fun after-hours social events. This year (for better or worse....) I get to be as geeky as I want to be, 24x5. I'm really curious what kind of ideas I'm going to end up with by the end of this.
So, welcome to my professional blog. I'm looking forward to sharing my experiences as a software development team lead, an explorer of new technologies, a Ruby on Rails enthusiast, and as a "veteran" software developer.