Oracle PL/SQL Loop to Update Fields with Periodic Commits

Have you ever had one of those moments where you’ve just finished importing 1.7 million user records from flat files into a normalized set of ATG Personalization tables, plus some custom tables, and only then realized that the passwords from the old system, while in a compatible MD5 format, are all UPPERCASE, while ATG uses all-lowercase hashes?

It’s not a great moment.

Initially I was stuck in an ATG-specific mind-set. I figured the two possible solutions were as follows:

  1. Use the already sub-classed ProfileFormHandler to override the out-of-the-box password comparison and uppercase the hash value before checking it against the database, and update the create- and update-related methods to uppercase the hash as well.
  2. Drop the 1.7 million records, update the importer tool to lowercase the incoming MD5 hash before persisting it, and re-import.

The issue with option 1 is that it means overriding the perfectly good out-of-the-box behavior of ATG Profiles for the life of the application due to a set of awkward initial data. I’m happy to override ATG behavior when I should, but this just felt wrong.

The issue with option 2 is that it would take approximately 2–3 days. The import process was slow and had to be done in small chunks for reasons I won’t get into now.

It wasn’t until after sleeping on it that I realized I had put myself into the ATG box unnecessarily. Let’s just fix the data in situ. I didn’t know much PL/SQL, but some Googling (I love how that’s a common verb now, btw) led me to write this up:

declare
  i number := 0;
begin
  -- Walk every row and lowercase the stored password hash
  for r in (select id from dps_user)
  loop
    update dps_user
       set password = lower(password)
     where id = r.id;

    i := i + 1;
    if mod(i, 10000) = 0 then   -- commit every 10,000 records
      commit;
    end if;
  end loop;
  commit;   -- pick up the final partial batch
end;

I’m no Oracle DBA, but it works!

Essentially it loops through every row in the dps_user table and replaces each password with an all-lowercase version of itself. Every 10,000 records it commits, since doing all 1.7 million updates in a single transaction would overwhelm the undo tablespace.
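
In hindsight, a set-based variant would avoid holding a cursor open across commits (which can trigger ORA-01555 “snapshot too old” on long runs). A minimal sketch, untested against this exact schema but assuming the same dps_user table and batch size:

begin
  loop
    -- Fix up to 10,000 rows per pass, skipping rows that are already lowercase
    update dps_user
       set password = lower(password)
     where password <> lower(password)
       and rownum <= 10000;

    exit when sql%rowcount = 0;   -- nothing left to fix
    commit;                       -- keep the undo footprint bounded per batch
  end loop;
end;

Each pass updates at most 10,000 rows and commits before the next one starts, so there’s no open cursor for the commit to invalidate.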

Enjoy!

Why Oracle makes me nuts

When left to my own devices I tend to use PostgreSQL or MySQL. They’re both free, open source, run on every platform I use, and I understand how to use them, tune them, and so on. However, when I’m building an application and database scripts that will be deployed on Oracle, for an application which doesn’t really support PostgreSQL or MySQL in production anyhow, I will use Oracle. I don’t have much against Oracle, really; mostly I just don’t understand its inner workings very well, and usually I only deal with it when it breaks.

Installing Oracle on Mac OS X has been possible for a while, but FAR from easy. More often than not the installers would throw fatal errors and die horrible deaths. However, you can now get a pre-packaged Oracle 10 XE installation running on Linux as a downloadable Parallels image:

http://ptn.parallels.com/en/ptn/desc/?VA_id=169

Which takes a lot of the pain away. Just download, run, and poof: you have a running Oracle 10 instance with a web admin console, and you can plug SQLDeveloper, JBoss, ATG, etc. into it.

Until it starts throwing errors. Not Oracle itself; Oracle just sits there happily, but your applications start blowing up. Not at first, but when you start trying to DO stuff, like, say, running a bunch of install scripts that start failing halfway through. I finally tracked it down as a known issue, mentioned on a few blogs like this one:

http://it.newinstance.it/2007/06/01/ora-12519-tnsno-appropriate-service-handler-found/

So there’s a hack/fix at least, but come on, Oracle! When your database falls over faster than any open source database, you have issues.
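
If I remember the linked fix correctly, it comes down to raising XE’s conservative default process limit; something along these lines, run as a privileged user in SQL*Plus (the value 150 is illustrative, not gospel):

-- XE ships with a low PROCESSES cap; connections beyond it get ORA-12519
alter system set processes = 150 scope = spfile;
-- spfile-scoped changes only take effect after a bounce:
shutdown immediate
startup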

This isn’t an “I hate Oracle” rant. When it comes to huge production systems, I’m not 100% sure I’d pick anything other than Oracle on Veritas. But I just wish they made it easier to use all around. Has anyone seen a very serious production system using PostgreSQL? I’m sure they exist, but I haven’t seen one. By serious I mean 8,000+ simultaneous JDBC connections, tens of thousands of transactions per minute, tables with 10+ million rows, queries that do multi-table joins across tables of that size, and tested disk, hardware, and server failover. I’m not doubting that it can be done, but who’s doing it?