All Stories
Dec 18 2023
Okay, now we pass the warnings down to gpg and gpgsm so the problem will be easier to analyze. We also stop trying after 10 seconds. Sample error messages:
My hypothesis about what happens:
- I enter a folder with some new messages. KMail starts a sync.
- I read some of the new messages. They are marked as read in KMail and (hypothesis) Akonadi records the status changes for later because a sync is running.
- The sync finishes. KMail shows the new messages again as unread which (hypothesis) is what the sync reports.
- In the background Akonadi syncs the deferred status changes to the server. (hypothesis)
Both the company and I are running Debian Dovecot.
It seems I'm using Exchange (account at my old university and o2mail.de).
I have yet to reproduce this, so I have not triaged it yet. The usual way to forward attached mail in Outlook is with .msg files, but I recently noticed that Outlook on the web also allows you to save mail as .eml. In theory, .eml should also be much simpler to handle.
Could you share which IMAP server software you run personally and in the office (probably Dovecot or Cyrus IMAP)?
Your comment on speed might also be why I do not see this issue. Nearly all of my mail, and all my large folders, go through my private mail server, which is hosted at a dedicated hosting provider, while our company mail server is located in the office and, afaik, only reachable through the office internet connection with VPN. I once had a tool / command to deliberately slow down connections on some port; maybe you can use something like that? I don't think that we can give you access to the company mail server / VPN since you are not a regular employee.
Oh yeah! I was looking for a way to integrate LLMs / GPT models into our code. Let us change gpgme_data_identify so that it queries an online service about what to do with such a file 😅 I guess that is how Microsoft would implement such a feature nowadays. Gathering training data to help humanity.
We should sell this as AI or at least as "smart file drop". ;-)
@jukivili Thanks a lot. Please push the change to 1.10 branch and master.
Dec 17 2023
Dec 16 2023
Attached patch should work around the issue:
We were hoping for before Christmas, but it is unlikely due to some other stuff we had to do. Early January. It is definitely a priority for us right now to get it out.
But I guess syncing a second client should do the trick to get the server state. At least ebo, afaik, has both Claws and KMail configured with the same server.
No, our webinterface is telnet :)
Dec 15 2023
Is there also a web interface for the @gnupg.com mail server? It would be useful to be able to check what the read/unread status on the server is.
I saw this recently on an IMAP subfolder with between 4,000 and 5,000 mails. I marked a few hundred new ones as read in one go. The folder does not even have mail threads in it, and I've never used that function anyway.
This was on my work account @gnupg.com (the only account where I use KMail). Should we ask Werner for details of the server?
@werner Any news on when 2.4.4 will land? I cannot figure out how to build the project from source, and I couldn't adapt the Fedora packaging to build it either. I would like to have a way to finally sign my git commits.
first draft is up at https://invent.kde.org/pim/libkleo/-/merge_requests/67
The issue was obvious, but I looked in the wrong place. I looked for a ref-counting error, but the issue was that the control only returned a temporary pointer that had exactly one reference.
The proxy-model approach seems to work; I can't find any fundamental problems caused by having more rows in the proxy model than in the source model. Since this is the least invasive approach, with (almost) all changes contained in the new model, I'm going to continue with it for now.
I'm seeing this on an inbox with about 4,000 messages. It may depend on the server (speed) because I'm not seeing this on larger folders on another server. But it does happen for more than one server. I'm not using "Ignore thread". Just the plain old mailing-list style message list. I'll keep an eye on when it happens for which folders.
I would experiment with replacing the flat keylist model with a flat userid keylist model. For places where we only want to see the primary user IDs we could simply put a filter proxy on top. Obviously, that's a big architectural change so I'd expect some breakage. Maybe we start with adding a new model but keep in mind to replace the old model. Or we immediately replace the old model with a primary-user-ids-proxied new model.
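A minimal sketch of that idea, using illustrative plain-C++ types rather than libkleo's actual model classes: each key is exploded into one row per user ID, and a filter keeps only primary user IDs for the places that want the old one-row-per-key view.

```cpp
#include <string>
#include <vector>

// Illustrative stand-ins for the model rows; not libkleo's real API.
struct UserIdRow {
    std::string fingerprint;  // key this user ID belongs to
    std::string userId;
    bool isPrimary;
};

struct Key {
    std::string fingerprint;
    std::vector<std::string> userIds;  // first entry treated as primary here
};

// The new base model's content: one row per user ID of every key.
std::vector<UserIdRow> explodeToUserIds(const std::vector<Key> &keys)
{
    std::vector<UserIdRow> rows;
    for (const auto &key : keys) {
        for (std::size_t i = 0; i < key.userIds.size(); ++i) {
            rows.push_back({key.fingerprint, key.userIds[i], i == 0});
        }
    }
    return rows;
}

// The "filter proxy on top": keep only primary user IDs, which reproduces
// the old one-row-per-key view.
std::vector<UserIdRow> primaryOnly(const std::vector<UserIdRow> &rows)
{
    std::vector<UserIdRow> out;
    for (const auto &row : rows) {
        if (row.isPrimary) {
            out.push_back(row);
        }
    }
    return out;
}
```

In the real code the explode step would be a model (or proxy) over the keylist and the primary-only step a QSortFilterProxyModel; this sketch only shows the row arithmetic.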
Also, are you using the "Ignore thread" function?
Can you be more specific how much is "many messages"? Is it tens, hundreds, thousands, tens of thousands? :)
Thank you for your report.
I suggest replacing size.width() with qMax(size.width(), minWidth), where minWidth is the width of some reasonably sized text (to account for different text sizes), instead of trying to fight with the combo box. Combo boxes are not a good UI element for long entries.
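The suggested clamp can be sketched like this; std::max stands in for Qt's qMax, and the concrete minimum width would in practice be computed from a sample string (e.g. via QFontMetrics), not hard-coded.

```cpp
#include <algorithm>

// Hypothetical sketch of the suggested sizeHint clamp. In real Qt code this
// would be qMax(size.width(), minWidth). Never report a width smaller than
// the chosen minimum, even when the combo box currently only holds short
// placeholder entries like "new key" / "no key".
int clampedWidth(int contentWidth, int minWidth)
{
    return std::max(contentWidth, minWidth);
}
```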
If I understand you correctly, we will then have the hierarchical keylist model, the flat keylist model, and, as a new model, the user-ID keylist model in libkleo/src/models/keylistmodel? To be honest, you probably know best how to implement this in the most useful way.
I just rechecked: we are actually not including the root certificate, but we are including the intermediate certificate. Since there have never been any complaints about this, let us not change it. The original reporter must have somehow deleted the intermediate certificate, or it was an older certificate from us.
Shouldn't that be the difference between SizeAdjustPolicy AdjustToContentsOnFirstShow and AdjustToContents?
I do not think it could cause any harm: if a certificate is re-issued we can adapt, and in the worst case we would ship a very small obsolete intermediate. And it would be one less potential problem when verifying our signature on a PC where the intermediate certificate is not available at the time. Having a self-contained chain in the signature is also helpful for scripted verification checks, where you would then just need to check that the root CA is trusted and could then verify everything offline.
And we take a bit of pride in the fact that we can easily run on offline systems, where this might actually create a bit of a hassle to get the certificate in. This would also allow for easier verification using osslsigncode itself, independent of Microsoft tools.
Gpgpass already installs a desktop file; I just overlooked it.
Dec 14 2023
As far as I can tell, the sizeHint is "correct" for the items that are currently in the combo box. At the time the dialog is created, the combo box only contains two items ("new key" and "no key"), which both have shorter strings than an average key description. The actual keys are only added to the combo box at a later point. I tried to make the dialog's size update when that happens, but have not managed to get it working yet; I think that some cache is not being invalidated correctly.
I'm not sure if a proxy model is the best idea to explode the keys into user IDs. In particular, exploding the user IDs after filtering the keys sounds wrong because you would have to put another filter proxy on top to filter the user IDs. It might make more sense to have a proper model with all user IDs and then filter for primary user IDs if only those are needed.
I don't think that it is a good idea to include the chain. Sometimes certificates are re-issued - they are still valid but signed by another top level cert. The certificate also has the URL from where to fetch the intermediates. Let's close this.
Werner and Tobias are both correct. If a new subkey is generated from scratch then gpg uses the current time as key creation time and sets the expiration date (in the internal in-memory representation of a public key) to the key creation time plus the expiration value.
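A tiny sketch of that semantics (the names are illustrative, not gpg internals): the stored expiration value is an offset in seconds, and the absolute expiration is the creation time plus that offset.

```cpp
#include <ctime>

// Illustrative helper, not gpg code: gpg stores the expiration as seconds
// relative to the key creation time, so the absolute expiration timestamp
// is simply creation + offset.
std::time_t expirationTimestamp(std::time_t creation, long secondsAfterCreation)
{
    return creation + static_cast<std::time_t>(secondsAfterCreation);
}
```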
Sorry, I should have been more precise in my description of the problem. Specifically with --quick-addkey, gpg's behavior seems to be that the expiration, when given using seconds=... is treated as seconds from now.
Dec 13 2023
FWIW, when updating the expiration time gpg does this:
My explanation of gpgme's behavior was not quite correct: Specifically in the QGpgMEQuickJobs for creating (sub)keys, the API uses QDateTimes, which are then converted to seconds since epoch.
Neither is quite correct. gpg takes the expiration time in seconds since the creation time. For a new key this is close to the current time, but not exactly. For prolonging an expiration, this is of course different: the creation time of the key needs to be taken into account. I recall that we once had a discussion and agreed to keep it as time after the creation of the key. This avoids problems with the expiration going negative.
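Assuming that interpretation, a caller who means "expire N seconds from now" would have to translate that into an offset from the creation time; a hypothetical sketch (not gpg's actual code):

```cpp
#include <ctime>

// Hypothetical conversion: if the caller means "expire N seconds from now"
// but the stored value is seconds since the key's creation, the key's age
// must be added to the offset. Since the age (now - creation) is
// non-negative, the resulting offset can never go negative.
long offsetSinceCreation(std::time_t now, std::time_t creation, long secondsFromNow)
{
    return static_cast<long>(now - creation) + secondsFromNow;
}
```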
In gpg you may also specify the expiration date in ISO format. Afaik, gpgme supports this.
Sorry for the fallout and thank you for taking care of it.