Discussion:
[Libpqxx-general] Non-throwing functions
Trigve Siver
2010-12-15 13:50:23 UTC
Hi,
I have an idea about non-throwing functions in pqxx and would like to hear some
opinions.

As stated in [1], not all errors should be treated as exceptional behaviour,
so I was wondering whether the same applies to SQL. Some kinds of errors are
exceptional and should divert the flow of the program. For instance, a syntax
error in an SQL query is exceptional behaviour and an exception should be
raised (although I can imagine that for some query-verifier application it
isn't). On the other side, there can be an SQL error about a duplicate primary
key when inserting records. That error is not exceptional if we are expecting
it. The situation can be handled, for instance, by checking whether the record
exists: if it does, show an error message; if not, insert the record. Or we
can try to insert the record and, if a record (with the same PK, UNIQUE
constraint, ...) already exists, show an error message based on the error
code. We could also wrap the code in a try-catch block and catch the desired
exception, but that is costly because throwing/catching exceptions in C++ is
expensive AFAIK (though for me the bigger problem is that this kind of error
isn't an exceptional one, as stated above :)).

So I got the idea of adding non-throwing overloads for some operations
(non-throwing with respect to SQL errors, that is; other C++ exceptions could
still be thrown from somewhere deep in the stack). As a proof of concept I'm
proposing to add an overload pqxx::transaction_base::exec(const
std::stringstream &Query, error_code &ec, const std::string
&Desc=std::string()). For now, error_code could be a simple struct with some
error enums (the details can come later). Of course I'm willing to implement
it.

What do you think?

Thanks

Trigve

[1] -
http://blog.think-async.com/2010/04/system-error-support-in-c0x-part-1.html
Bryan Henderson
2010-12-15 23:29:11 UTC
It sounds like a great idea to me, as a user. While I thoroughly appreciate
libpqxx turning query failures into C++ exceptions, there is also a need for a
lower level interface to the database in which you just tell the Postgres
server to execute a transaction. If the server executes it, regardless of the
outcome, you got what you asked for, so there's no exception.

I wouldn't want this service to interpret the error in any way -- it should
give exactly the information that the Postgres server gave. A layer above
that (maybe part of libpqxx, maybe not) could implement various forms of query
in which outcomes that the Postgres server calls an error would be alternate
forms of success. For example, if a subroutine call is defined as "insert
this row unless there is already one with the same key" then if the Postgres
server says "error: duplicate key", that's a success.
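
To sketch what I mean (purely illustrative, not a proposal for libpqxx
itself): such a layer could be a small helper that runs the INSERT through
today's exception-based API and maps the server's duplicate-key refusal to an
ordinary return value. The helper name and the enum are made up, and it
naively assumes the duplicate key is the only error the statement can hit:

#include <pqxx/pqxx>
#include <string>

enum insert_outcome { row_inserted, row_already_exists };

// Sketch of a "layer above": treat the server's duplicate-key refusal as one
// of the normal outcomes of the call instead of an exception.
insert_outcome insert_unless_duplicate(pqxx::transaction_base &t,
                                       const std::string &insert_sql)
{
  try
  {
    t.exec(insert_sql);
    return row_inserted;
  }
  catch (const pqxx::sql_error &)
  {
    // Naive: assumes the only plausible failure here is a duplicate key.
    // Note that inside a transaction the failed statement has still put the
    // transaction into an aborted state on the server.
    return row_already_exists;
  }
}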

It would actually be better if the server itself could be made to treat that
as a success (with e.g. some option on the INSERT), but there's nothing we can
do about that.


But I don't agree with the principles of the error classes described in the
blog post. I believe an error is something that the caller is not in a
position to react to in any way other than to give up and report the problem
for human analysis. C++ exceptions with text descriptions are perfect for
this. If the caller can react to a particular class of "error," then it isn't
an error -- the caller got the information he asked for. And I don't think
there's any practical way to organize that information into generic classes,
because every method has different kinds of results.
--
Bryan Henderson San Jose, California
Trigve Siver
2010-12-16 09:25:08 UTC
From: Bryan Henderson <bryanh at giraffe-data.com>
To: trigves at yahoo.com
Cc: libpqxx-general at pgfoundry.org
Sent: Thu, December 16, 2010 12:29:11 AM
Subject: Re: [Libpqxx-general] Non-throwing functions
[snip]
I wouldn't want this service to interpret the error in any way -- it should
give exactly the information that the Postgres server gave. A layer above
that (maybe part of libpqxx, maybe not) could implement various forms of query
in which outcomes that the Postgres server calls an error would be alternate
forms of success. For example, if a subroutine call is defined as "insert
forms of success. For example, if a subroutine call is defined as "insert
this row unless there is already one with the same key" then if the Postgres
server says "error: duplicate key", that's a success.
As stated in my previous mail, that is exactly what would happen when calling
e.g. exec(): the error code parameter would carry values similar to libpq's
PQresultErrorField return value (for details see
pqxx::result::ThrowSQLError()).
It would actually be better if the server itself could be made to treat that
as a success (with e.g. some option on the INSERT), but there's nothing we can
do about that.
Could you elaborate on why it would be better, please? Also, this approach
would only solve part of the whole problem.
But I don't agree with the principles of the error classes described in the
blog post. I believe an error is something that the caller is not in a
position to react to in any way other than to give up and report the problem
for human analysis. C++ exceptions with text descriptions are perfect for
this. If the caller can react to a particular class of "error," then it isn't
an error -- the caller got the information he asked for. ...
I think "error == exception" IS NOT always true. I think that term error is
maybe used unwise. From what I understand error is meant as "error code" or
something like that which could also contain codes that signals that operations
completed successfully (as used traditionally in C). For example If I run SQL
query (as mentioned above - inserting row which could exist) and SQL server
returns error that row already exist, it is SQL domain error and maybe I'm
waiting for this to happen and make appropriate actions if result was
unsuccessful. Reacting to this error with error message is exactly what we don't
want to do.
... And I don't think
there's any practical way to organize that information into generic classes,
because every method has different kinds of results.
I think error_code, error_condition and error_category are designed well, in
the sense that you can define many error categories and connect them to error
codes and error conditions. I don't see any serious flaw there.
--
Bryan Henderson San Jose, California
Thanks

Trigve
Bryan Henderson
2010-12-17 02:52:31 UTC
Post by Trigve Siver
As stated in my previous mail, that is exactly what would happen when
calling e.g. exec(): the error code parameter would carry values similar to
libpq's PQresultErrorField return value (for details see
pqxx::result::ThrowSQLError()).
That's what I like about it.
Post by Trigve Siver
Post by Bryan Henderson
It would actually be better if the server itself could be made to treat
that as a success (with e.g. some option on the INSERT), but there's
nothing we can do about that.
Could you elaborate why it would be better, please?
The client/server protocol should recognize that duplicate key isn't always
exceptional for the same reason libpqxx should. Having successes and
failures properly separated all the way down the stack is cleaner than having
them mixed at the client/server protocol level and then separated out
somewhere above by code that analyzes error codes.

It's important to distinguish between a result that the caller expects and is
set up to react to and one of which the caller wants to remain ignorant. The
former would be the request-to-insert not inserting because of a duplicate key
(where that's normal). The latter would be the request-to-insert not
inserting because the system ran out of disk space. I call the latter a
failure, error, or exception and the former a success.
Post by Trigve Siver
Also, this approach would only solve part of the whole problem.
I'm sure there's some problem it doesn't completely solve. It does solve the
problem of exceptions and non-exceptions getting mixed up.
--
Bryan Henderson San Jose, California
Jeroen Vermeulen
2010-12-17 08:23:05 UTC
Post by Bryan Henderson
The client/server protocol should recognize that duplicate key isn't always
exceptional for the same reason libpqxx should. Having successes and
failures properly separated all the way down the stack is cleaner than having
them mixed at the client/server protocol level and then separated out
somewhere above by code that analyzes error codes.
I agree with that last sentence. In fact it goes further down than the
client/server protocol: Trigve's problem is with the SQL layer itself,
and merely compounded by the design choices that have been based on it
in the higher layers.

For the given example, that looks like a deal-breaker to me. We may
want to find a better example.

What is the problem with the example? There are many, actually. If you
insert a row that violates a unique constraint, that's defined as an
error and the back-end treats it as such. Your transaction aborts.

To work around that in a dbtransaction (though not in a nontransaction),
either libpqxx or the app would have to wrap each such statement in a
subtransaction, just so that it could go "only kidding!" on failure.
That's extra code, extra SQL statements, extra network traffic, extra
savepoints on the server, extra risk of unexpected interactions e.g.
with other subtransactions. Forget about "exceptions are expensive."

Also, in SQL, constraints are the last line of defense against certain
classes of application bugs. That's simply the way they were designed.
Violating them is more like an assertion failure than it is like
"create the file if it didn't exist." In the example, if your app
doesn't bother to find out whether it's duplicating data until after
it's done the INSERT, it could be hiding some very nasty bugs. This is
different from when you create a file because at INSERT time, your app
has already collected and composed the row's data. You may think this
kind of bug would never happen to you but I've had to find and fix them.
Come to think of it I've had a grown man break down and cry because of
symptoms largely caused by this class of bug. Never saw him again.

If you want to use constraints as a way of checking whether it would be
valid to do something, also bear in mind that the constraints may be
deferred. A deferred constraint gets checked at commit time.
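
A quick sketch of what that means (hypothetical tables; assume t2.id_ref is a
DEFERRABLE INITIALLY DEFERRED foreign key into t1): the INSERT itself
succeeds, and the violation only surfaces when the transaction commits,
possibly far away from the statement you were watching:

#include <pqxx/pqxx>

void deferred_constraint_example(pqxx::connection &conn)
{
  pqxx::work t(conn);
  t.exec("SET CONSTRAINTS ALL DEFERRED");
  // No matching t1 row exists yet, but the deferred FK is not checked here.
  t.exec("INSERT INTO t2 (id_ref) VALUES (42)");
  // ... arbitrary amounts of other work, perhaps in other modules ...
  t.commit();   // the foreign-key violation is raised here, not at the INSERT
}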

You could solve that by also requiring the app to check at commit, as
well as any point where the constraint might be re-enabled, plus of
course any point where a new applicable constraint gets created, plus
any other possible execution points where the insert (or constraint
creation) may get deferred to if the mapping from C++ to SQL execution
isn't very direct. (Not something I'd do when designing for this kind
of error handling, so another potential performance sacrifice there).
In practice I think most programmers would say "screw it, I'm doing the
insert here so that's where I check for the error." Then later they'd
make changes at some other abstraction layer that broke that assumption,
breaking the app in a way that unit tests probably wouldn't reveal and
integration tests probably wouldn't bother to check for. Exceptions are
generally a pretty effective way to deal with such failures at
loosely-defined points in execution.

Then there's the problem of specificity of errors. What do you want to
do if your INSERT violates a different constraint than the one you were
expecting? You may care, you may not care. I can tell you what people
would do in practice: assume that the violation is the one they're
thinking of when they're writing the code. And there you have a
high-tech new way of forgetting to check for error returns.
Post by Bryan Henderson
It's important to distinguish between a result that the caller expects and is
set up to react to and one of which the caller wants to remain ignorant. The
former would be the request-to-insert not inserting because of a duplicate key
(where that's normal). The latter would be the request-to-insert not
inserting because the system ran out of disk space. I call the latter a
failure, error, or exception and the former a success.
There is currently no "request to insert" in SQL, only "insert." Adding
one would be an issue for the pgsql-hackers list. I'd be highly
surprised if it hasn't come up before, probably in the form of upsert.
It might be nice to have a "hey database, what constraints would I break
if I did this" command though.

I'm sorry to be so negative. My criticism is more about the example
than about the principle. Maybe we should try for another example?


Jeroen
Trigve Siver
2010-12-17 13:18:12 UTC
From: Jeroen Vermeulen <jtv at xs4all.nl>
To: Bryan Henderson <bryanh at giraffe-data.com>
Cc: trigves at yahoo.com; libpqxx-general at pgfoundry.org
Sent: Fri, December 17, 2010 9:23:05 AM
Subject: Re: [Libpqxx-general] Non-throwing functions
You could solve that by also requiring the app to check at commit, as well as
any point where the constraint might be re-enabled, plus of course any point
where a new applicable constraint gets created, plus any other possible
execution points where the insert (or constraint creation) may get deferred to
if the mapping from C++ to SQL execution isn't very direct. (Not something I'd
do when designing for this kind of error handling, so another potential
performance sacrifice there). In practice I think most programmers would say
"screw it, I'm doing the insert here so that's where I check for the error."
Then later they'd make changes at some other abstraction layer that broke that
assumption, breaking the app in a way that unit tests probably wouldn't reveal
and integration tests probably wouldn't bother to check for. Exceptions are
generally a pretty effective way to deal with such failures at loosely-defined
points in execution.
Then there's the problem of specificity of errors. What do you want to do if
your INSERT violates a different constraint than the one you were expecting?
You may care, you may not care. I can tell you what people would do in
practice: assume that the violation is the one they're thinking of when
they're writing the code. And there you have a high-tech new way of forgetting
to check for error returns.
The only question that has been in my head all day is: when should one throw
an exception in C++? Should an SQL error really be propagated to C++ as an
exception? I have examples both for and against. If in some application I run
an SQL query that should always succeed, then an SQL error is exceptional
behaviour because I cannot go further and I know something is wrong out there.
On the other hand, if I write a PG administration tool, I can be sure that
some SQL errors will happen (when running a user's query, etc.) and I know
that isn't exceptional behaviour because I'm expecting it. For me, an
application in "normal" mode (that is, in a situation where no external events
such as network outage, out of memory, etc. can cause failure) shouldn't throw
any exception during its lifetime (I mean that the user shouldn't be able to
trigger an exception with bad input). If I've got this wrong, I'd be glad to
read some text that explains it in more detail (i.e. when to throw and when
not).
There is currently no "request to insert" in SQL, only "insert." Adding one
would be an issue for the pgsql-hackers list. I'd be highly surprised if it
hasn't come up before, probably in the form of upsert. It might be nice to have
a "hey database, what constraints would I break if I did this" command though.
Yes.
I'm sorry to be so negative. My criticism is more about the example than
about the principle. Maybe we should try for another example?
Better to be negative than to say nothing :)

Thanks
Jeroen
Trigve
Jeroen Vermeulen
2010-12-20 04:06:02 UTC
Post by Trigve Siver
The only question that has been in my head all day is: when should one throw
an exception in C++? Should an SQL error really be propagated to C++ as an
exception? I have examples both for and against. If in some application I run
an SQL query that should always succeed, then an SQL error is exceptional
behaviour because I cannot go further and I know something is wrong out there.
The fact that any SQL error aborts your transaction is, I think, a major
influence here. Especially since libpqxx is designed around
transactions. It might be different in other designs (though I think
that non-throwing errors plus abort-on-error would be a very welcoming
environment for application bugs).

One way to classify failures is to ask whether they are exceptional;
another is to ask what they imply for the caller's control flow. In the
case of a postgres SQL error, you need to "bail out" of the transaction.
Call it cleanup if you will. But that's a very close match for what
exceptions do. The natural, maintainable thing to do is to express the
handling of that condition in control flow.

If you're expecting the error then that could take the shape of an "if"
instead of a "try," but does it matter much?
Post by Trigve Siver
On the other hand, if I write a PG administration tool, I can be sure that
some SQL errors will happen (when running a user's query, etc.) and I know
that isn't exceptional behaviour because I'm expecting it.
I think this example is a bit, no pun intended, exceptional. An SQL
failure is not so much expected as it is somebody else's problem. And
you still need to report it, which involves a separate code path--if not
"exceptional" then hopefully "less probable."

Also the impact of the choice we have to make here is highly localized
in this example. Your administration tool would have this in probably
just one place:

try
{
  show_results(t.exec(user_query));
}
catch (const sql_error &e)
{
  report_sql_error(e.what());
}

The alternative would be something like

error_code e;
result r = t.exec(user_query, e);
if (e.is_error()) report_sql_error(e);
else show_results(r);

I'm not arguing which form is better, just that it's not going to have
widespread consequences for this application. I feel we're also missing
the interesting part where we have to specify which failures we consider
non-exceptional.
Post by Trigve Siver
For me, an application in "normal" mode (that is, in a situation where no
external events such as network outage, out of memory, etc. can cause failure)
shouldn't throw any exception during its lifetime (I mean that the user
shouldn't be able to trigger an exception with bad input). If I've got this
wrong, I'd be glad to read some text that explains it in more detail (i.e.
when to throw and when not).
Bertrand Meyer defines an exception situation as one where a function's
preconditions are met but its postconditions can't be. I'm not sure how
helpful that is, since it just shifts the problem to the definition of
preconditions. But ultimately we're trying to find what works best, not
what would satisfy a textbook that tries to predict what will work best.


Jeroen
Trigve Siver
2012-03-19 14:37:27 UTC
From: Jeroen Vermeulen <jtv at xs4all.nl>
To: Trigve Siver <trigves at yahoo.com>
Cc: Bryan Henderson <bryanh at giraffe-data.com>; libpqxx-general at pgfoundry.org
Sent: Monday, December 20, 2010 5:06 AM
Subject: Re: [Libpqxx-general] Non-throwing functions
Post by Trigve Siver
The only question that has been in my head all day is: when should one throw
an exception in C++? Should an SQL error really be propagated to C++ as an
exception? I have examples both for and against. If in some application I run
an SQL query that should always succeed, then an SQL error is exceptional
behaviour because I cannot go further and I know something is wrong out there.
The fact that any SQL error aborts your transaction is, I think, a major influence here. Especially since libpqxx is designed around transactions. It might be different in other designs (though I think that non-throwing errors plus abort-on-error would be a very welcoming environment for application bugs).
One way to classify failures is to ask whether they are exceptional; another is to ask what they imply for the caller's control flow. In the case of a postgres SQL error, you need to "bail out" of the transaction. Call it cleanup if you will. But that's a very close match for what exceptions do. The natural, maintainable thing to do is to express the handling of that condition in control flow.
If you're expecting the error then that could take the shape of an "if" instead of a "try," but does it matter much?
Post by Trigve Siver
On the other hand, if I write a PG administration tool, I can be sure that
some SQL errors will happen (when running a user's query, etc.) and I know
that isn't exceptional behaviour because I'm expecting it.
I think this example is a bit, no pun intended, exceptional. An SQL failure is not so much expected as it is somebody else's problem. And you still need to report it, which involves a separate code path--if not "exceptional" then hopefully "less probable."

try
{
  show_results(t.exec(user_query));
}
catch (const sql_error &e)
{
  report_sql_error(e.what());
}

The alternative would be something like

error_code e;
result r = t.exec(user_query, e);
if (e.is_error()) report_sql_error(e);
else show_results(r);

I'm not arguing which form is better, just that it's not going to have widespread consequences for this application. I feel we're also missing the interesting part where we have to specify which failures we consider non-exceptional.
Post by Trigve Siver
For me, an application in "normal" mode (that is, in a situation where no
external events such as network outage, out of memory, etc. can cause failure)
shouldn't throw any exception during its lifetime (I mean that the user
shouldn't be able to trigger an exception with bad input). If I've got this
wrong, I'd be glad to read some text that explains it in more detail (i.e.
when to throw and when not).
Bertrand Meyer defines an exception situation as one where a function's preconditions are met but its postconditions can't be. I'm not sure how helpful that is, since it just shifts the problem to the definition of preconditions. But ultimately we're trying to find what works best, not what would satisfy a textbook that tries to predict what will work best.
Sorry for bringing up this old topic again, but today I ran into an example I need to handle in my app. I have two tables, T1 and T2. "T1.id" is the PK, "T2.id_ref" is an FK to T1.id. No cascade delete is permitted. Suppose there is a reference to some T1 item in the T2 table. Now I want to delete an item from T1. If I delete the record I get an exception about the FK violation, and I need to propagate this to the user. I see two options here. (I should also mention that the whole logic of the app is written in Python, embedded in my C++ application - it isn't really important for this context, but I want to be precise ;)).

1. Handle this specific exception (propagate it to Python) and deal with it appropriately.
2. Write a function in SQL that checks whether there is some reference to the T1 item in the T2 table; if yes, show an error, else proceed.

To me, option 2 is the "cleaner" solution, but also a bit tougher than the first one.

The first one is easier, because I only need to propagate exceptions to my logic and handle them there.
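
To make option 1 concrete, something like this is what I mean (a rough
sketch; the helper is mine, and newer libpqxx versions would also let me
check the SQLSTATE or catch a more specific exception class instead of plain
sql_error):

#include <pqxx/pqxx>
#include <string>

// Option 1 (sketch): run the DELETE and treat an SQL error as "still
// referenced", which then gets propagated up to the user / Python layer.
bool try_delete_t1(pqxx::connection &conn, int id, std::string &user_message)
{
  try
  {
    pqxx::work t(conn);
    t.exec("DELETE FROM t1 WHERE id = " + pqxx::to_string(id));
    t.commit();
    return true;
  }
  catch (const pqxx::sql_error &)
  {
    // Assumes the only likely failure is the FK violation (SQLSTATE 23503).
    user_message = "Record is still referenced and cannot be deleted.";
    return false;
  }
}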

What do others think?
Jeroen
Thanks


Trigve
Jeroen Vermeulen
2012-03-19 19:05:08 UTC
Post by Trigve Siver
Sorry for bringing up this old topic again, but today I ran into an
example I need to handle in my app. I have two tables, T1 and T2.
"T1.id" is the PK, "T2.id_ref" is an FK to T1.id. No cascade delete is
permitted. Suppose there is a reference to some T1 item in the T2
table. Now I want to delete an item from T1. If I delete the record I
get an exception about the FK violation, and I need to propagate this
to the user. I see two options here. (I should also mention that the
whole logic of the app is written in Python, embedded in my C++
application - it isn't really important for this context, but I want
to be precise ;)).
1. Handle this specific exception (propagate it to Python) and deal
with it appropriately. 2. Write a function in SQL that checks whether
there is some reference to the T1 item in the T2 table; if yes, show
an error, else proceed.
To me, option 2 is the "cleaner" solution, but also a bit tougher than
the first one.
The first one is easier, because I only need to propagate exceptions
to my logic and handle them there.
What do others think?
In my experience, this usually means that you're approaching a problem
from somewhere in the middle. It becomes easier if you can approach it
from above. Maybe the whole question of exceptions doesn't arise.

Is that too vague? :) What I mean is that there are probably just one
or two places in your application where you want deletions from T1.
(Otherwise you'd probably have allowed cascading deletes.) Does that
happen in a place where you can cleanly, comfortably, sensibly deal with
any T2 rows that might refer to the T1 rows you're going to delete? For
instance you might simply want to delete the T2s first, or null out
their foreign keys.
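
As a rough sketch (the names simply follow the example, and the helper is
made up): the one place that deletes from T1 handles the dependents itself,
inside the same transaction:

#include <pqxx/pqxx>
#include <string>

void delete_t1_with_dependents(pqxx::connection &conn, int id)
{
  pqxx::work t(conn);
  const std::string key = pqxx::to_string(id);
  // Either remove the referring rows, or detach them; pick whichever the
  // relationship actually means.
  t.exec("DELETE FROM t2 WHERE id_ref = " + key);
  // Alternative: t.exec("UPDATE t2 SET id_ref = NULL WHERE id_ref = " + key);
  t.exec("DELETE FROM t1 WHERE id = " + key);
  t.commit();
}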

If you find you can do that, deal with the problem at that point and
you're done. If not, go one step up in the call tree and ask yourself
the same question again. Try to find the place where the problem is
simple and solve it there, even if that means more code, and even if it
means slower code.

If you let the FKs issue come down to the last moment and the
bottom of the call tree, you typically have less overview, fewer
choices, less control over the circumstances, more complicating
interactions. It's easy to get stuck on deeply technical questions that
way, or optimize without gaining anything.


Jeroen
Trigve Siver
2012-03-20 07:41:22 UTC
From: Jeroen Vermeulen <jtv at xs4all.nl>
To: Trigve Siver <trigves at yahoo.com>
Cc: "libpqxx-general at pgfoundry.org" <libpqxx-general at pgfoundry.org>
Sent: Monday, March 19, 2012 8:05 PM
Subject: Re: [Libpqxx-general] Non-throwing functions
Post by Trigve Siver
Sorry for bringing up this old topic again, but today I ran into an
example I need to handle in my app. I have two tables, T1 and T2.
"T1.id" is the PK, "T2.id_ref" is an FK to T1.id. No cascade delete is
permitted. Suppose there is a reference to some T1 item in the T2
table. Now I want to delete an item from T1. If I delete the record I
get an exception about the FK violation, and I need to propagate this
to the user. I see two options here. (I should also mention that the
whole logic of the app is written in Python, embedded in my C++
application - it isn't really important for this context, but I want
to be precise ;)).
1. Handle this specific exception (propagate it to Python) and deal
with it appropriately. 2. Write a function in SQL that checks whether
there is some reference to the T1 item in the T2 table; if yes, show
an error, else proceed.
To me, option 2 is the "cleaner" solution, but also a bit tougher than
the first one.
The first one is easier, because I only need to propagate exceptions
to my logic and handle them there.
What do others think?
Thanks for the reply,
In my experience, this usually means that you're approaching a problem
from somewhere in the middle. It becomes easier if you can approach it
from above. Maybe the whole question of exceptions doesn't arise.
Is that too vague? :) What I mean is that there are probably just one
or two places in your application where you want deletions from T1.
(Otherwise you'd probably have allowed cascading deletes.) Does that
happen in a place where you can cleanly, comfortably, sensibly deal with
any T2 rows that might refer to the T1 rows you're going to delete? For
instance you might simply want to delete the T2s first, or null out
their foreign keys.
I don't think I can cleanly deal with the T2 rows when deleting the T1 row. The example is like a CUSTOMER (T1) and ORDER (T2) relationship, where each ORDER needs to have an associated CUSTOMER. Note also that ORDER handling is done in a different application module than CUSTOMER, so the CUSTOMER module doesn't know that something like ORDERS even exists. When the user wants to delete a CUSTOMER row, I cannot tell whether the row is still referenced. It could also be referenced by other tables (INVOICE, ...). How is this situation handled in practice?
If you find you can do that, deal with the problem at that point and
you're done.? If not, go one step up in the call tree and ask yourself
the same question again.? Try to find the place where the problem is
simple and solve it there, even if that means more code, and even if it
means slower code.
I don't know if I could go one step up. Where else should I handle it?
If you let the FKs issue come down to the last moment and the
bottom of the call tree, you typically have less overview, fewer
choices, less control over the circumstances, more complicating
interactions. It's easy to get stuck on deeply technical questions that
way, or optimize without gaining anything.
Jeroen
Thanks

Trigve
Jeroen Vermeulen
2012-03-22 02:54:02 UTC
Post by Trigve Siver
I don't think I can cleanly deal with the T2 rows when deleting the T1
row. The example is like a CUSTOMER (T1) and ORDER (T2) relationship,
where each ORDER needs to have an associated CUSTOMER. Note also that
ORDER handling is done in a different application module than CUSTOMER,
so the CUSTOMER module doesn't know that something like ORDERS even
exists. When the user wants to delete a CUSTOMER row, I cannot tell
whether the row is still referenced. It could also be referenced by other
tables (INVOICE, ...). How is this situation handled in practice?
Often you don't really want to delete a record, because historical
information is also important. In a situation like that, the solution
is often to add a Boolean column such as "is_active"; and instead of
deleting rows, you just set is_active to false.
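
As a minimal sketch (assuming a customer table with such a column):

#include <pqxx/pqxx>

void deactivate_customer(pqxx::connection &conn, int customer_id)
{
  pqxx::work t(conn);
  // "Delete" the customer by flipping the flag; the row and its history stay.
  t.exec("UPDATE customer SET is_active = FALSE WHERE id = "
         + pqxx::to_string(customer_id));
  t.commit();
}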

But if you do want to delete customers, one thing you can do is have the
order module register a callback with the customer module: if you're
going to delete a customer, call this function first so that I can check
if the customer can be deleted in its present state, or just delete that
customer's orders, or whatever makes sense for that particular relationship.
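
A rough sketch of what that registration could look like on the C++ side (all
names are made up; the actual DELETE and error reporting are left out):

#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

class customer_module
{
public:
  // A hook either vetoes the deletion (returns false and fills 'reason'),
  // or cleans up its own referring rows and returns true.
  typedef std::function<bool(int customer_id, std::string &reason)>
      pre_delete_hook;

  void register_pre_delete_hook(pre_delete_hook h)
  {
    hooks.push_back(std::move(h));
  }

  bool delete_customer(int customer_id, std::string &reason)
  {
    for (std::size_t i = 0; i < hooks.size(); ++i)
      if (!hooks[i](customer_id, reason))
        return false;          // e.g. "an order is still in progress"
    // ... perform the actual DELETE FROM customer here ...
    return true;
  }

private:
  std::vector<pre_delete_hook> hooks;
};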

This sounds very close to just doing it in the database, but sometimes
it makes perfect sense to punt the problem to the user at the
application level: "you can't delete this customer, because an order is
still in progress."


Jeroen

Bryan Henderson
2010-12-17 15:53:45 UTC
Thank you for a thorough and convincing explanation of why simply calling a
duplicate key a success on the client end isn't equivalent to a conditional
insert.

And for the record, my reference to an SQL conditional insert was purely
hypothetical, meant to illustrate the abstract philosophy (separating success
and failure) I'm following in analyzing Trigve's proposal.

In my own work, I don't think I've ever analyzed a Postgres error. (I instead
do the advance check, e.g. for an existing row.) So I don't have another example,
and maybe SQL/Postgres was designed so there isn't one.


So that leaves the separate objective of roughly the same libpqxx change,
which Trigve brings up again today: encapsulating an entire level of failure.
"Tell the server to insert a row" is an entirely different request from
"insert a row." In the former case, if libpqxx were to report that the server
refused to insert -- for any reason -- that's a success. The PG
administration tool is the best example I can think of for an application that
would use "tell the server to insert a row."

But libpqxx's current API considers refusal by the server to be just as
exceptional as if libpqxx was unable to get the request to the server at all.
--
Bryan Henderson San Jose, California