Differences in behaviour between Hibernate delete queries and the old way

In Hibernate 2, you could sort-of do a bulk delete via the Session interface. In Hibernate 3, there are true bulk deletes, à la Section 3.11 of the EJB 3 Persistence API. However, there are some significant differences that people migrating to the new functionality need to be aware of.


The main difference is one of behaviour. Under Hibernate 2, the Session.delete(String) method took an HQL query string. It then executed that query, iterated over the result, and called Session.delete(Object) on each object. This meant that the performance was relatively slow (n+1 database calls, where n was the number of objects to delete). On the other hand, it also meant that the various side-effects of deleting a single object (cascades, cache updates, etc.) were respected.

In Hibernate 3, the Session.delete(String) method is deprecated; it’s no longer on the main Session interface, but it is available on the ‘classic’ interface. Instead, there is now support for DELETE queries. These get translated to a single SQL delete statement, resulting in only one database call. Much faster, but it doesn’t appear to respect the side-effects.
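To make the difference in database traffic concrete, here’s a minimal sketch in plain Java (no Hibernate on the classpath). The FakeSession type is a made-up stand-in that just records the SQL statements each style would issue – it’s illustrative only, not the real Hibernate API:

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteStyles {
    /** Hypothetical stand-in that records each SQL statement it would send. */
    static class FakeSession {
        final List<String> statements = new ArrayList<>();

        // Hibernate 2 style: delete(String hql) fetched the matching rows,
        // then issued one DELETE per object -> n+1 statements in total.
        void deleteByQuery(String hql, int matchingRows) {
            statements.add("SELECT ... " + hql);            // 1 call to fetch
            for (int i = 0; i < matchingRows; i++) {
                statements.add("DELETE ... WHERE id = ?");  // n calls to delete
            }
        }

        // Hibernate 3 style: a bulk DELETE query is translated to a
        // single SQL statement -> 1 call, but no cascade side-effects.
        void bulkDelete(String hql) {
            statements.add("DELETE ... " + hql);
        }
    }

    public static void main(String[] args) {
        FakeSession h2Style = new FakeSession();
        h2Style.deleteByQuery("from Foo", 100);
        System.out.println("Hibernate 2 style: " + h2Style.statements.size() + " statements");

        FakeSession h3Style = new FakeSession();
        h3Style.bulkDelete("from Foo");
        System.out.println("Hibernate 3 style: " + h3Style.statements.size() + " statement");
    }
}
```

With 100 matching rows, the old style issues 101 statements to the new style’s one – which is the whole appeal of the bulk query, and also why it can’t see the individual objects to cascade from.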

So, here’s the list of differences you want to be aware of when migrating:

  • The calling method has changed. Instead of session.delete("from Foo"), it’s now session.createQuery("delete from Foo").executeUpdate().

  • The HQL syntax has restrictions. These are:
    • You cannot use aliases. So "delete from Foo foo where foo.bar = :bar" isn’t valid, while "delete from Foo where bar = :bar" is.[1]
    • No inner joins in the query (you _can_ use subselects in the where clause, for similar behaviour)
    • And, of course, you need to have “delete” at the front.
  • Using a bulk delete query is not the same as deleting the objects one after another. You do _not_ get cascading deletions, so if you want them, you’d better configure your database for them instead. I don’t know the impact on cached objects (either in the session or the second-level cache) – test with care.
  • Again: take real care with relational constraints. For example: using a bulk delete will not delete entries from join tables, or delete dependent objects. If you need this, and you want to use the bulk delete, you will need CASCADE DELETE turned on in the database.
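For that database-side configuration, the usual mechanism is a foreign key declared with ON DELETE CASCADE. A hypothetical sketch (the table and column names are made up, and the exact DDL syntax varies between databases):

```sql
-- Parent table; a bulk HQL delete targets rows here directly.
CREATE TABLE foo (
    id  BIGINT PRIMARY KEY,
    bar VARCHAR(255)
);

-- Child rows reference the parent; with ON DELETE CASCADE the database
-- itself removes them when a parent row goes, even via a bulk DELETE.
CREATE TABLE foo_item (
    id     BIGINT PRIMARY KEY,
    foo_id BIGINT NOT NULL REFERENCES foo (id) ON DELETE CASCADE
);

-- An HQL "delete from Foo where bar = :bar" becomes roughly:
DELETE FROM foo WHERE bar = ?;
-- ...and the database, not Hibernate, cleans up the matching foo_item rows.
```

Note that this duplicates relationship knowledge that already lives in your Hibernate mappings, which is exactly the trade-off discussed in the comments below.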

In short:

  • Treat DELETE queries with care; they are not a simple upgrade from the old mechanism. They are entirely different, and they have different behaviour.
  • Read and understand the “relevant” sections of the EJB 3 Persistence spec[2].
  • If in doubt, use the ‘classic’ version to preserve the behaviour. That’s certainly what I’ll be planning on doing.

There may well be more differences; I’m not finished exploring this behaviour yet, by any means, and it’s not very well documented. Most of the Hibernate documentation says “see the EJB spec”, and the EJB spec itself doesn’t elaborate on the implications very well (for example: there is no mention of the cascade behaviour, yea or nay, in Section 3.11 where the bulk deletes are defined[3]).

As I find out more, I’ll update this posting (particularly if I find out the interaction with the caches).

(I’d also like to stress I’m _not_ trying to diss the Hibernate team; it’s a great product that I really enjoy using. But the overlap with EJB 3 is more than a little rough, guys, and this is a “Here Be Dragons” area.)


[1] Curiously, this one actually goes against the draft EJB spec; it’s the spec that’s wrong. Honest. Straight from the horse’s mouth.

[2] I wish I knew what these were… try Section 3.11 and 2.3.2 of the draft spec, at a minimum.

[3] I’m sure that Hibernate implements the intent of the spec – Gavin King, after all, helped draft the spec. But discerning that intent isn’t an easy task for mere mortals such as myself.

Author: Robert Watkins

My name is Robert Watkins. I am a software developer and have been for over 20 years now. I currently work for people, but my opinions here are in no way endorsed by them (which is cool; their opinions aren’t endorsed by me either). My main professional interests are in Java development, using Agile methods, with a historical focus on building web based applications. I’m also a Mac-fan and love my iPhone, which I’m currently learning how to code for. I live and work in Brisbane, Australia, but I grew up in the Northern Territory, and still find Brisbane too cold (after 22 years here). I’m married, with two children and one cat. My politics are socialist in tendency, my religious affiliation is atheist (aka “none of the above”), my attitude is condescending and my moral standing is lying down.

7 thoughts on “Differences in behaviour between Hibernate delete queries and the old way”

  1. I am surprised that hibernate or JSR 220 didn’t include an explicit “cascade delete” option that behaved like the classic hibernate delete.

    There is nothing inherently bad about the hibernate cascade delete other than the n+1 (but if the method was named suitably, then people would understand what they are getting into when they call the method).

    Cascading deletes is one of the scary things in RDBMSes (at least it is for this person, who still wakes up screaming at night… jooohnnnnyyyy).

    I know the EJB 3 spec isn’t finished yet, but it would be a bit sad if they left cascade deletes out (the logic would be “well, RDBMSes do that best in one call” – but this would mean essentially keeping your relationship logic in the database as well as in the hibernate/ejb3 mappings).

    Will be an interesting one to watch.

  2. (I am of course implying that the classic hibernate Session.delete(HQL) was not explicit – maybe it is and I am a bit dim).

  3. The EJB 3 spec _has_ cascade deletes. Arguably, the way Hibernate works is compliant (certainly Gavin thinks so, and he helped write the spec).

    The way Hibernate 3 works is that deletes cascade when they are called on individual entities (via the Session.delete(Object) method). This, of course, is what Hibernate 2 did under the covers. It’s only when you use the bulk DELETE query that you don’t get the cascade behavior.

    The EJB 3 Persistence spec, in its current form (2nd public draft) has exactly 0 words on the interaction of bulk delete or update queries with the EJB lifecycle. According to comments from Hibernate developers, the EJB lifecycle documented in Section 2.3 (where the cascade rules are defined) refers only to in-memory instances that you call remove() on explicitly.

    Section 3.11 of the EJB spec defines the bulk UPDATE and DELETE queries. Section 2.3 defines EJB lifecycles – Hibernate objects conform to a similar (but not identical) lifecycle.

  4. There are two different kinds of thing here:

    (1) Fetch a bunch of objects into memory, delete them all, respecting cascades and other lifecycle. You can still do this in HB3, just run a query and iterate over the results, calling delete(). There is no need for the redundant deprecated method.

    (2) True bulk delete, which no-one truly expects to respect cascades. (Yes, perhaps it would be *possible* to implement this efficiently in *some cases*, but no-one has ever implemented such a thing before and it looks *very* difficult.) Actually, IMO, bulk delete is not really that useful; it is the bulk *update* that is the most useful piece here.

  5. Gavin, the only real problem I have with it is that it’s not very clear in the documentation that this is the case. 🙂 The Hibernate doco says “see the EJB 3 spec”. The EJB 3 spec doesn’t say anything, one way or the other, about the cascade behaviour.

    Perhaps no-one who understands the difficulties expects cascades to be respected; but naively I did, and I’m sure other people naively have as well. Perhaps the use case I had for it (a test utility to reset the database to a clean state) is unusual for production code. 🙂

    In any case, once I became aware of the differences, the differences weren’t a problem; it was the lack of knowledge that hurt. This (and the other) blog entries are largely to help me never forget.

  6. I agree with Robert. I would be naive and “hope” that uber-smart Hibernate would know what to do if I am using HQL for bulk delete.

    Bulk delete with cascading is nice. For example, I need to remove all record of a clients project details. Of course, in practice they tend to be “logical” deletes anyway, so perhaps this is a storm in a teacup.

    However I understand that “true” bulk deletes would be pretty hard to do without just effectively doing fetch into memory, call delete, cascade underneath.

    But now I know to explicitly do it – great !

  7. the main thing about versioned data is that there’s a common requirement for following some sort of expiration protocol. i use hibernate because it makes it easy for me to map an object tree. now when it comes to deleting this tree (assuming for a moment database cascading doesn’t exist) who knows best what the mapping pattern is? yup, hibernate – as knows most about the exact mappings/relationships.

    loading this entire archived tree into memory is a waste of time and resources if i follow the H2 style deletion. H3 deletion is a good step forward but practically of no use in this regard. as of this writing i still don’t see an efficient solution to large tree deletions (bulk or not).
