Oracle Commerce Cloud – The Good, Bad, and Ugly

What is Oracle Commerce Cloud?

Oracle has been working on their Oracle Commerce Cloud offering for a while; it was first pre-released in the early spring of 2015 and publicly launched in June 2015. It was the focus of the Commerce portion of the recent Oracle CX 2016 conference in April 2016.

The Commerce Cloud offering is a SaaS eCommerce platform based on the industry-leading ATG eCommerce platform, which Oracle acquired when it purchased ATG in late 2010.

The Good

I absolutely understand why Oracle should be building a SaaS eCommerce solution. There needs to be a solid Oracle solution focused on the traditional mid-market of retailers. ATG Oracle Commerce (On Premise) is not necessarily a great fit for many mid-market retailers. The platform's strong suits of scalability and flexibility are often more than a retailer doing $20-75 million annually through its online channel actually needs, and both strengths come with costs: implementation and ongoing development on a complex platform (rather than a simple storefront application), plus hosting and 24×7 support for the full infrastructure and application stack.

Currently the mid-market eCommerce space is dominated by Magento (with On Prem and SaaS options) and DemandWare (SaaS). Both offer relatively quick and inexpensive implementations and lower ongoing costs (compared to Oracle Commerce) for mid-market customers, and often provide enough features and customizability to meet the needs of that market sector. Both Magento and DemandWare are maturing and pushing further up-market, taking on larger and larger clients with higher-traffic, higher-revenue websites.

[Figure: eCommerce platform fit by revenue, including Oracle Commerce Cloud]

It makes perfect sense for Oracle to push downward into the mid-market to compete against DemandWare and Magento, not only to protect against market share erosion by the encroaching mid-market players, but also to provide what no one else really has: a smooth growth path for companies whose eCommerce channels will grow from “mid-market” to “enterprise”, or whose growth will necessitate features and customizations unavailable on traditional mid-market solutions.

Oracle has a lot of strong assets which should make it possible to deliver a solid mid-market SaaS eCommerce offering and grow market share in that ecosystem. They have a world-class, mature eCommerce platform in ATG Oracle Commerce, and they have development, hosting, sales, and marketing assets which are virtually unparalleled.

The Bad

Unfortunately, the Oracle Commerce Cloud solution isn't a competitive offering for the mid-market yet. Even after having been live for a year, the feature set is woefully limited, delivering far less functionality than even the free version of Magento. The vast majority of the ATG Oracle Commerce features are unavailable, or badly hamstrung.

I’ve been told, although I have no way to confirm this myself, that the Oracle Commerce Cloud solution is built not on the current ATG Oracle Commerce platform (version 11.2), but instead is based on version 10.2, released over three years ago.

The biggest “Cloud” SaaS advantage, effortless auto-scaling, doesn’t exist for Oracle Commerce Cloud.  Clients have to provide traffic numbers ahead of time, and Oracle doesn’t take responsibility for performance or availability problems if traffic exceeds those estimates.  The “Cloud” in Commerce Cloud isn’t what you’d expect.

Every client and system integrator I have spoken with who has hands-on Oracle Commerce Cloud experience has complained about the serious limitations, lack of features, broken functionality, and lack of extensibility of the platform in its current state.

The UI is pretty, but it is not well suited to managing large catalogs. Promotions are limited to only four basic types and don't support common scenarios such as BOGO or category- or brand-driven multi-item purchase discounts. There's no support for standard payment types such as store credit or gift cards. Error messages in the admin are unhelpful things like “Error 20,000”. The promised integrations with Oracle retail products like CPQ don't exist yet. There's no hierarchy or inheritance available for custom product types. There are performance issues with large catalogs and with large numbers of custom product or SKU attributes. There are significant front-end optimization (FEO) and end-user performance problems. Etc…

Oracle Commerce Cloud just isn't ready to compete with Magento and DemandWare. I truly wish it were, and I hope that it will achieve feature parity with the incumbent offerings soon. There is an aggressive roadmap for the Oracle Commerce Cloud offering; however, there haven't been many (any?) significant changes in what is being shown in demos between the initial release a year ago and the Oracle CX 2016 conference a few weeks ago.

Oracle has certainly committed to the “Cloud” approach.  At an enterprise level, Oracle has Cloud Fever.  Cloud is the new focus across all their products and services.   In the eCommerce universe Oracle is focusing all their sales and marketing efforts behind the Oracle Commerce Cloud solution, so my hope is that the engineering resources brought to bear will be able to really challenge Magento and DemandWare in the SMB and mid-market.

The Ugly

The biggest issue that I have with Oracle Commerce Cloud is that Oracle is selling it to EVERYONE: mid-market, enterprise, and existing Oracle Commerce customers. They are aggressively pitching it to clients where it is clearly NOT a good fit, and where it can't possibly support the features and integrations the client needs. The sales team is strongly motivated to sell Commerce Cloud instead of “On Prem”, as they get virtually no commission or quota progress for the much more mature and comprehensive ATG Oracle Commerce offering.

While there are some opportunities where Oracle Commerce Cloud is a good solution, at this point in time it’s much more likely that the client will discover the product does not actually meet their needs and will be unhappy, or they will realize that upfront and Oracle will lose the deal to DemandWare/Hybris/Magento instead.

Another major problem is that Oracle is publicly talking about Commerce Cloud 100% of the time. The roadmap for that solution is detailed and expansive. In contrast, the core ATG Oracle Commerce platform, which has been adopted as the core of eCommerce for hundreds of major global companies, has virtually no roadmap, and appears to be getting zero attention from Oracle (Sales, Marketing, Engineering). At the Oracle CX 2016 conference there was no content about Oracle Commerce; everything was focused on Commerce Cloud, even though most of the attendees in the eCommerce tracks were existing customers who have massive investments in their On Prem ATG Oracle Commerce platforms, and who couldn't move to the Cloud solution (due to lack of features and integrations) even if they wanted to (which most do not). These companies, as well as the ecosystem of Oracle Commerce system integrators, are feeling betrayed by Oracle, and are not seeing a viable future for the platform they have embraced so fully and invested in so heavily.

In my 18 years in the ATG ecosystem I have never heard so many long-time ATG Oracle Commerce clients so angry, and talking so seriously about re-platforming onto non-Oracle eCommerce solutions. System integrators who have been ATG shops since their creation are embracing Hybris and Magento because Oracle isn't selling any new Oracle Commerce deals. Oracle has a major damage control situation here, and I haven't seen any sign that they realize it.

To be clear, I am a committed believer in the on-premise ATG Oracle Commerce platform. It has the best technology, and it is a great choice for large enterprise retailers (and others). There are so many major companies who have fully invested in ATG Oracle Commerce, and who could never move to a SaaS offering, that I feel there's just no way for Oracle to pull the plug on such an important, and revenue-generating, product offering.

Here’s hoping that Oracle’s “Cloud” frenzy doesn’t sink their leading position in the enterprise eCommerce market space, while they try to grab a little slice of the mid-market.

Further Reading:

Spark::red 2011 Review and 2012 Preview

2011 has been an amazing year for Spark::red ATG Oracle Commerce Hosting.

  • We’ve added several new clients (including a well known member of the Fortune 1000!)
  • We’ve added 101 new dedicated servers and over 20 cloud computing instances
  • We’ve opened an office in Boston
  • We’ve earned our PCI Level 1 MSP Certification
  • We’ve hired several employees, a large number of contractors, and even picked up an intern!
  • We’ve gone from 3 data centers in the USA to 13 data centers and 16 points-of-presence including several international facilities
  • While our competitors have raised prices, we’ve managed to keep prices the same or actually reduce them in many places, all while providing newer, more powerful hardware across the board
  • We’ve introduced a new Oracle ATG Commerce Standard Hosting Package which provides excellent performance, stability, and security, for a very affordable package price
  • Served well over 200 TB of content
  • Partnered with JBoss/Redhat, Oracle, Akamai, Keynote, Knowledge Path, and many other industry leaders in order to provide the absolute best hosting, support, and related services
  • And more!
We’ve grown a lot in 2011 in every measurable dimension. Clients, revenue, employees, contractors, servers, bandwidth, offices, processes: every aspect of the business has been growing nicely.
2012 is looking like it will be even bigger!  We have lots of prospective clients, two new international data centers opening in Australia and South America, international clients, new hires, a site redesign, a big sales and marketing push, new free ATG modules and open source code, and lots more!
I’m very excited about the upcoming year and what it will bring.  If you’d like to talk to us about what we can do to help you in 2012 give us a call or email us about your ATG Hosting needs!

JBoss JMS Doesn’t Create Tables with XA Datasource

The JBoss Messaging service (at least on JBoss 4.3 EAP) defaults to using a local Hypersonic database. For production use you’ll want to switch away from Hypersonic to a real database, such as Oracle (in this example).

If you’re using XA datasources in general, it’s tempting to go ahead and create the new DefaultDS datasource definition as an XA datasource (like the example in jboss-eap-4.3/docs/examples/jca/oracle-xa-ds.xml). However, I’ve just discovered that if you do that, the JMS startup service won’t successfully create the tables it needs. The HILOSEQUENCES and TIMERS tables get created by the UUID key generator service, but the JMS table creation silently fails, and then you get errors like this:

[code]
11:16:32,161 ERROR [ExceptionUtil] ServerPeer[0] startService
java.sql.SQLException: ORA-00942: table or view does not exist
[/code]

Switch the DefaultDS definition to a non-XA version, and it will create all of the JBM_* tables successfully.
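
For reference, here's a minimal sketch of what a non-XA DefaultDS definition can look like, adapted from the stock oracle-ds.xml example that ships with JBoss. The host, SID, and credentials below are placeholders, and the real example file includes additional pool and validation settings you'll likely want:

[code]
<?xml version="1.0" encoding="UTF-8"?>
<!-- Non-XA DefaultDS sketch (deploy as e.g. deploy/oracle-ds.xml).
     Replace host, SID, user, and password with your own values. -->
<datasources>
  <local-tx-datasource>
    <jndi-name>DefaultDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@your-db-host:1521:YOURSID</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <user-name>jms_user</user-name>
    <password>jms_password</password>
  </local-tx-datasource>
</datasources>
[/code]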

Rant About Core-Based Licensing

This is a copy of a small rant I just posted on the ATG_Tech Google Group.

Please note that ATG isn’t the only company doing this; Oracle does it, as do many others. I just think that it’s wrong. :)

If you draw a graph showing processing power against software license cost for the same software module, over time, you’d see a steady increase of processing power, a pretty flat cost line for years and years, and then once multi-core hit the server market, you’d see a huge jump in cost, without a significant change in the climb of performance.

——–
I think this licensing model is a huge mistake for the customers.

CPU manufacturers changed course from developing faster and faster chips to developing more and more cores on a given chip at lower clock speeds. The reason is that it’s easier, both cost-wise and in terms of silicon manufacturing yield, to add cores and rely on the OS and applications to make use of them. So ideally, the end user sitting in front of their computer will see a similar level of performance increase as chips go wider as they would have had chip manufacturers continued the megahertz wars, while the cost of that increased performance to the chip manufacturers is lower. (There was also an approaching barrier in how far you can shrink the die size without moving to a whole other base material, plus power dissipation issues.)

Intel released their 2.2 GHz Pentium in January of 2002. Current Intel dual and quad core processors don’t really exceed 3.0 GHz, and many new chips are still being released at 2.2, 2.4, or 2.6 GHz core speeds. So after 6 years, based on an 18-month Moore’s Law cycle (yes, I know Moore’s Law is about transistor density, not computation speed, but for the sake of estimation it’s pretty close to how the industry was progressing with clock speed before the shift to multi-cores), in the alternate universe of single-core chips we’d expect to see 35.2 GHz chips. With a 24-month cycle it would be 17.6 GHz. At least I think that’s how the math works. At any rate, today’s multi-core processors don’t deliver any more aggregate performance than the per-CPU performance we should expect by now, based on the history of CPU clock-speed increases.
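
To make the back-of-envelope math explicit, here's a tiny sketch of the projection. The 2.2 GHz base and the 6-year window come from the numbers above; the two doubling periods are just the assumptions being compared:

[code]
# Projected single-core clock speed had the "megahertz wars" continued
# on a Moore's-Law-like doubling curve.
base_ghz = 2.2      # Intel's 2.2 GHz Pentium, January 2002
years = 6           # roughly 2002 -> 2008

for cycle_years in (1.5, 2.0):
    doublings = years / cycle_years
    projected = base_ghz * 2 ** doublings
    print(f"{cycle_years}-year doubling cycle: {projected:.1f} GHz")
# 1.5-year cycle -> 35.2 GHz, 2-year cycle -> 17.6 GHz
[/code]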

The problem here is that ATG customers (and customers of other per-core-licensed products) are now paying up to 2X more in licensing costs for very similar (if not identical) levels of performance than they would have if CPUs had simply kept getting faster.

You could make the case that for the work of handling request threads, two 3.0 GHz cores perform a bit better than a single 6.0 GHz (or 17.6 GHz) core, but honestly it’s really hard to say, since we don’t have 6.0 GHz cores readily available to test ATG on. I’d be VERY surprised if the performance was more than 10% different. And yet we have to pay far more for it.

Customers of ATG/Oracle/etc… are being penalized for the (legitimate) decisions of Intel and AMD.
——-

Oracle Export (exp) and Initial Extent Size Issues

If you have a large database in Oracle, with a tablespace containing say 2 gigabytes worth of data, and you then go in and delete a large number of rows from a large number of tables, shrinking it down to about 300 megabytes worth of data, and then you create an Oracle export using exp, you might expect that you could import this Oracle dump file into another database and have it take up 300 MB.

You’d be wrong.

The dump file ends up with all of the create table and create index commands using an INITIAL extent storage setting based on the size of the old table at its fullest. So when you run the import of the dump file, it basically eats up 2 gigabytes of tablespace for 300 MB of data. You can’t edit the INITIAL values in the dump file, since it’s binary, and if you edit it, you corrupt it. Oracle doesn’t seem to have any great ways to fix this, so here’s my hack:

  1. Do the full export, with compress=n (this is useful regardless).
  2. Generate a create tables script (I used my SQLDeveloper GUI client) that just creates the tables (no INITIAL settings)
  3. Generate a create constraints script (I used my SQLDeveloper GUI client) that just creates constraints/indexes
  4. Run the create tables script on the new database
  5. Run the import with these options: ignore=yes constraints=no indexes=no
  6. Run the create constraints script
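
For concreteness, here's a sketch of what the exp/imp invocations from steps 1 and 5 might look like. The user, password, connection strings, and file names are placeholders; the flags mirror the steps above:

[code]
# Step 1: full export, without compressing extents into one big INITIAL extent
exp user/password@sourcedb full=y compress=n file=export.dmp log=export.log

# Step 5: import data only, into the tables created by your "create tables" script
imp user/password@targetdb full=y ignore=yes constraints=no indexes=no file=export.dmp log=import.log
[/code]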

Now you have a 300 MB database. If you export from this, you end up with an export file that will create other 300 MB databases and you can share it with your friends.

Good luck!

P.S. Oracle DBAs might have a better way of doing this. I don’t know.