Spark::red 2011 Review and 2012 Preview

2011 has been an amazing year for Spark::red ATG Oracle Commerce Hosting.

  • We’ve added several new clients (including a well-known member of the Fortune 1000!)
  • We’ve added 101 new dedicated servers and over 20 cloud computing instances
  • We’ve opened an office in Boston
  • We’ve earned our PCI Level 1 MSP Certification
  • We’ve hired several employees, a large number of contractors, and even picked up an intern!
  • We’ve gone from 3 data centers in the USA to 13 data centers and 16 points of presence, including several international facilities
  • While our competitors have raised prices, we’ve managed to keep prices the same or actually reduce them in many places, all while providing newer, more powerful hardware across the board
  • We’ve introduced a new Oracle ATG Commerce Standard Hosting Package, which provides excellent performance, stability, and security for a very affordable package price
  • We’ve served well over 200 TB of content
  • We’ve partnered with JBoss/Red Hat, Oracle, Akamai, Keynote, Knowledge Path, and many other industry leaders to provide the absolute best hosting, support, and related services
  • And more!
We’ve grown a lot in 2011 in every measurable dimension. Clients, revenue, employees, contractors, servers, bandwidth, offices, processes: every aspect of the business has been growing nicely.
2012 is looking like it will be even bigger!  We have lots of prospective clients, two new international data centers opening in Australia and South America, international clients, new hires, a site redesign, a big sales and marketing push, new free ATG modules and open source code, and lots more!
I’m very excited about the upcoming year and what it will bring.  If you’d like to talk to us about what we can do to help you in 2012, give us a call or email us about your ATG Hosting needs!

JBoss JMS Doesn’t Create Tables with XA Datasource

The JBoss Messaging service (at least on JBoss 4.3 EAP) defaults to using a local Hypersonic database. For production use, you’ll want to switch from Hypersonic to a real database, such as Oracle (used in this example).

If you’re using XA datasources in general, it’s tempting to go ahead and create the new DefaultDS datasource definition as an XA datasource (like the example in jboss-eap-4.3/docs/examples/jca/oracle-xa-ds.xml). However, I’ve just discovered that if you do, the JMS startup service won’t successfully create the tables it needs. The HILOSEQUENCES and TIMERS tables get created by the UUID key generator service, but the JMS table creation silently fails, and then you get errors like this:

11:16:32,161 ERROR [ExceptionUtil] ServerPeer[0] startService
java.sql.SQLException: ORA-00942: table or view does not exist
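
One quick way to confirm what’s going on is to connect as the schema user that DefaultDS points at and check which of the tables actually exist. This is just a sketch using the default JBoss Messaging table names:

SELECT table_name
  FROM user_tables
 WHERE table_name LIKE 'JBM_%'
    OR table_name IN ('HILOSEQUENCES', 'TIMERS');

With the XA DefaultDS in place you’ll typically see HILOSEQUENCES and TIMERS but none of the JBM_ tables.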

Switch the DefaultDS definition to a non-XA version, and it will create all of the JBM_* tables successfully.
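
For reference, here’s a rough sketch of what the non-XA (local-tx) DefaultDS definition can look like, modeled on the stock oracle-ds.xml example that ships in the same docs/examples/jca directory. The host, SID, and credentials are placeholders; point them at your real JMS schema:

<datasources>
  <local-tx-datasource>
    <jndi-name>DefaultDS</jndi-name>
    <!-- placeholder connection details -->
    <connection-url>jdbc:oracle:thin:@dbhost:1521:ORCL</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <user-name>jms_user</user-name>
    <password>jms_password</password>
    <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
    <metadata>
      <type-mapping>Oracle9i</type-mapping>
    </metadata>
  </local-tx-datasource>
</datasources>

If you drop this into the deploy directory, remember to remove the stock hsqldb-ds.xml (or at least its DefaultDS entry) so you don’t end up with two DefaultDS bindings.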

Rant About Core-Based Licensing

This is a copy of a small rant I just posted on the ATG_Tech Google Group.

Please note that ATG isn’t the only company doing this; Oracle does it, as do many others. I just think that it’s wrong. :)

If you graphed processing power against software license cost for the same software module over time, you’d see a steady increase in processing power and a pretty flat cost line for years and years; then, once multi-core chips hit the server market, you’d see a huge jump in cost without a significant change in the climb of performance.

--------
I think this licensing model is a huge mistake for the customers.

CPU manufacturers changed course from developing faster and faster chips to developing more and more cores per chip at lower clock speeds. The reason is that it’s easier, in terms of cost and silicon manufacturing yield, to add cores and rely on the OS and applications to make use of them. So ideally, the end user sitting in front of their computer sees a similar level of performance increase as chips go wider as they would have had chip manufacturers continued the megahertz wars, while at the same time the cost of that increased performance to the chip manufacturers is lower. (There was also an approaching barrier on how far you can shrink the die before moving to a whole other base material, plus power dissipation issues.)

Intel released their 2.2 GHz Pentium 4 in January of 2002. Current Intel dual and quad core processors don’t really exceed 3.0 GHz, and many new chips are still being released at 2.2, 2.4, and 2.6 GHz core speeds. So after 6 years, based on an 18-month Moore’s Law cycle (yes, I know Moore’s Law is about transistor density, not computation speed, but for the sake of estimation it’s pretty close to how the industry was progressing with clock speed before the shift to multi-cores), in the alternate universe of single-core chips we’d expect to see 35.2 GHz chips: 72 months is four doublings, and 2.2 GHz × 2^4 = 35.2 GHz. With a 24-month cycle it would be three doublings, or 17.6 GHz. At least I think that’s how the math works. At any rate, our current multi-core processors don’t provide any more total performance than the per-CPU performance we should expect at this point in time, based on the history of CPU performance increases.

The problem here is that customers of ATG (and of other products with per-core licensing) are now paying up to 2X more in licensing costs for very similar (if not the same) levels of performance than they would have if CPUs had simply gotten faster.

You could make the case that for the work of handling request threads, two 3.0 GHz cores perform a bit better than a single 6.0 GHz (or 17.6 GHz) core, but honestly it’s really hard to say, since we don’t have 6.0 GHz cores readily available to test ATG on. I’d be VERY surprised if the performance was more than 10% different. And yet we have to pay far more for it.

Customers of ATG/Oracle/etc… are being penalized for the (legitimate) decisions of Intel and AMD.
--------

Oracle Export (exp) and Initial Extent Size Issues

If you have a large Oracle database, with a tablespace holding, say, 2 gigabytes worth of data, and you go in and delete a large number of rows from a large number of tables, shrinking it down to about 300 megabytes worth of data, and then you create an Oracle export using exp, you might expect that you could import this dump file into another database and have it take up about 300 MB.

You’d be wrong.

The dump file ends up with all of the create table and create index commands using an INITIAL extent storage setting based on the size of each table at its fullest. So when you run the import of the dump file, it basically eats up 2 gigabytes of tablespace for 300 MB of data. You can’t edit the INITIAL values in the dump file, since it’s binary; if you edit it, you corrupt it. Oracle doesn’t seem to have any great way to fix this, so here’s my hack:

  1. Do the full export with compress=n (this is useful regardless).
  2. Generate a create-tables script (I used my SQL Developer GUI client) that just creates the tables (no INITIAL settings)
  3. Generate a create-constraints script (I used my SQL Developer GUI client) that just creates the constraints/indexes
  4. Run the create-tables script on the new database
  5. Run the import with these options: ignore=yes constraints=no indexes=no (see the example commands after this list)
  6. Run the create-constraints script
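
Putting the export and import steps together, the commands end up looking roughly like this. It’s a sketch only: the connect strings, schema name (MYSCHEMA), and file names are placeholders, and it shows a schema-level export rather than full=y, so adjust for your environment.

# Step 1: export without consolidating the allocated extents into one big INITIAL
exp myuser/mypassword@SOURCEDB owner=MYSCHEMA compress=n file=myschema.dmp log=exp_myschema.log

# Step 5: after the create-tables script has run on the target, import the data only
imp myuser/mypassword@TARGETDB fromuser=MYSCHEMA touser=MYSCHEMA file=myschema.dmp ignore=y constraints=n indexes=n log=imp_myschema.log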

Now you have a 300 MB database. If you export from this, you end up with an export file that will create other 300 MB databases and you can share it with your friends.
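
If you want to double-check the result, a quick query against USER_SEGMENTS as the schema owner shows how much space the imported objects actually take up (use DBA_SEGMENTS with an owner filter if you’re checking from another account):

SELECT ROUND(SUM(bytes) / 1024 / 1024) AS total_mb
  FROM user_segments;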

Good luck!

P.S. Oracle DBAs might have a better way of doing this. I don’t know.