Could someone please explain how old messages get purged in MeshChat? Currently, on my node, there are multiple messages from the future (March 25, 2031) that will not go away. All new messages appear lower in the list, after these messages from the future. http://n5mxi-pi.local.mesh/meshchat
David - N5MXI
HHUS #5456
OK, seriously, your node should never have allowed those messages into the message database, as the date is over 30 days in the future. The fact that they made it into the message DB tells me that the time on your node is not correct; I am willing to bet that the clock on the node is set somewhere around that date. I would highly recommend getting an NTP server on the mesh network so that the nodes stay coordinated and do not drift too far apart.
As for how messages get purged from the database, this is a setting in the config file. The max_messages_db_size setting is the maximum number of messages to maintain; any messages beyond this amount (i.e. the oldest ones, as the messages are sorted by timestamp) will be dropped and not written to the message DB. Consequently, when the time on the node gets corrected, those messages from the future should also drop out of the message DB, as they are more than 30 days in the future (the default limit set by valid_future_message_time).
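Conceptually, the trimming works like this. A minimal sketch in shell, assuming a tab-separated DB file with the epoch timestamp in the second column (the file name here is only an example):

    # Sort rows by the epoch timestamp (column 2 of the tab-separated file) and
    # keep only the newest 500 (the max_messages_db_size default). MeshChat does
    # this internally when writing the DB; this is just an illustration.
    sort -t$'\t' -k2,2n messages.EXAMPLE | tail -n 500 > messages.trimmed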
I just looked back at your message and realized that the URL that you provided indicates that it is a Raspberry Pi. It is unusual for the time to be off so much on a Pi, but it is possible. I would be interested to know what you find in this respect. Also it would be good to know what version of MeshChat you are running on the Pi.
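It would also be worth confirming that the Pi's clock is actually being synchronized, not just set to the right zone. Assuming a systemd-based OS like Bookworm, these will show the state:

    # Check whether the system clock is NTP-synchronized (systemd assumed).
    timedatectl status            # look for "System clock synchronized: yes"
    timedatectl timesync-status   # NTP server and offset, if systemd-timesyncd is in use
    date -u                       # sanity-check the current UTC time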
Setting up an NTP server is on my to-do list; however, I did confirm the date and time are correct on the Pi, and the correct time zone is set in raspi-config. My MeshChat has only been online for one week.
Version info:
max_message_db_size=500 (This seems rather large. Is this correct?)
Thanks for your help. Awaiting further instructions.
"max_message_db_size=500 (This seems rather large. Is this correct?)"
That is the default.
I wonder what happens if you delete those lines in /var/www/html/meshchat/db/messages.MVMchat?
Or edit the 2nd column to a lower date value?
Then restart.
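If you'd rather script it, something like this should work; I'm assuming the file is tab-separated with the epoch timestamp in that 2nd column:

    # Back up the DB, then keep only rows whose timestamp is not in the future.
    cd /var/www/html/meshchat/db
    cp messages.MVMchat messages.MVMchat.bak
    awk -F'\t' -v now="$(date +%s)" '$2+0 <= now+0' messages.MVMchat.bak > messages.MVMchat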
73, Chuck
Chuck,
I tried deleting the messages per your instructions, and all was well until the next sync with the other nodes in Zone: MeshChat. All the deleted messages reappeared. I looked at some other nodes, and it appears that I'm not the only one looking into the future.
David, it sounds like you are having a problem similar to what we had in Oregon a while ago. Your message database isn't exactly corrupted, but the dates are off because of some issue.
The short answer to your initial question is that the old messages do NOT go away. I think there might be a message-count limit where the oldest eventually get dropped, but we still have very old messages at the bottom of the list. Your problem with dates might mean they are stuck at the top of yours? MeshChat syncs with every other similarly named instance, and each node re-sorts its message database according to the date/time stamp. We proved this by disconnecting a node from our mesh and posting simultaneous messages with another node still on the mesh; when we put everything back on the mesh, both nodes adjusted their message database sequence to show the messages sorted by date/time stamp. Pretty cool, actually.
The only way to completely get rid of a corrupted message database (without being a programming guru) is to go to EVERY instance with the same linked name and REMOVE the package from the node. If even one installation remains, then when you reinstall the package and put it back on the mesh, every node will still find and accept the old corrupted data. At one point we gave up on the name of our statewide MeshChat zone and just started a new one. There was something else going on with package versions and firmware versions ... but I've only had one cup of joe this a.m.
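For what it's worth, on the nodes themselves that removal is just an opkg operation. The package names below are the ones that come up in this thread, so list what is actually installed first:

    # Find and remove the MeshChat packages on an AREDN (OpenWrt) node.
    opkg list-installed | grep -i meshchat
    opkg remove meshchat       # full package
    opkg remove meshchat-api   # API-only install, if that is what is present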
Ed
Perhaps I should have assumed that there are several MeshChat instances on your network using an identical Zone name.
I found 58 instances of Zone=MeshChat on my SUPERNODE.
I found 85 instances whose Zone name is MeshChat plus one or more additional characters.
I suggest that you change your Zone name to something unique, delete the offending messages, and reboot.
Then invite your neighbors to change their Zone names, delete the offending messages, and reboot.
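For anyone who wants to count these themselves: I pulled the advertised service names from a node. The services=1 parameter on sysinfo.json and the JSON field below reflect my understanding of the AREDN API, so inspect the raw JSON on your firmware before trusting the numbers:

    # Dump advertised service names and count MeshChat zones (AREDN API assumed).
    curl -s 'http://localnode.local.mesh/cgi-bin/sysinfo.json?services=1' \
        | jq -r '.services[].name' | sort > /tmp/zones.txt
    grep -c '^MeshChat$' /tmp/zones.txt   # Zone exactly "MeshChat"
    grep -c '^MeshChat.' /tmp/zones.txt   # "MeshChat" plus one or more characters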
I hope this helps,
Chuck
I am now the maintainer of MeshChat, and I would be interested to see a copy of the message DB. If you don't mind, you can send it to me at wt0f_at_arrl.net. I am really interested in what those message entries look like.
The only other thing I can think of is that the timestamp for those messages is '0', but that does not explain the year 2031, unless for some reason a library is representing '0' as that date instead of 1/1/1970. That just sounds wrong to me, though. Unfortunately, I cannot do much more without seeing what the message DB looks like.
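One quick check, assuming GNU date as on a Pi: convert in both directions to see whether the stored timestamps are really around 1.9 billion (a genuine 2031 date) or something else entirely:

    # Epoch value for the date being displayed, and the date for a zero timestamp.
    date -u -d '2031-03-25' +%s   # prints 1932163200
    date -u -d @0                 # prints Thu Jan  1 00:00:00 UTC 1970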
What would also be helpful is if you created an issue for this on GitHub, which would allow the problem to be tracked and the fixes tied to the issue. You will find the issues at https://github.com/hickey/meshchat/issues. If you want, you could upload the message DB there; otherwise, when I receive the DB from you, I will isolate the message entries in question and add them to the issue.
K7EOK is pretty much correct that the messages will continue to be replicated between all the MeshChat instances; this is due to the decentralized nature of MeshChat. What might need to be done is to kill the meshchatsync processes on all the MeshChat instances so that the message entries can be removed from the message DBs. Once all the message DBs have been updated, the meshchatsync processes can be restarted.
In more detail:
1. Designate one MeshChat instance as the source for the message DB and kill its meshchatsync process. Once this process is dead, the instance will no longer sync messages (i.e. pull) from other instances.
2. Update that instance's message DB to remove the offending message entries.
3. On every other instance, kill the meshchatsync process and remove the directory where the message DB is located (a sketch of this step is below). It is typically /tmp/meshchat on nodes; on Linux machines it could be anywhere, so check the meshchatconfig.lua file for the location. Killing meshchatsync while the DB directory is being removed matters, because otherwise the messages may get reintroduced from an instance that has not been updated yet.
4. Once all the instances have had their DB directory removed, meshchatsync can be started again everywhere, and all databases will be updated from the designated "master". After this is done there is no master copy of the DB any more, and all instances are equal.
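On a node, step 3 looks roughly like this; the paths and process name are the ones described above, and how you restart meshchatsync afterwards depends on how your install launches it (init script, or just reboot the node):

    # Step 3 on each non-"master" instance. AREDN node paths shown; on a Linux
    # install, get the DB directory from meshchatconfig.lua instead.
    kill "$(pidof meshchatsync)"   # stop syncing so the bad entries cannot be re-pulled
    rm -rf /tmp/meshchat           # remove the local message DB directory
    # ...once EVERY instance has been cleaned, restart meshchatsync (or reboot)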
We established a new MeshChat zone and built a new meshchat/db/messages.<newname> file that had most of our old messages and none of the "new" ones. This worked for a while, but a few days ago we got corrupted again, maybe while someone was setting up or maintaining a system that still had the bad messages in a zone that was either configured or advertised, and the new-name system tried to sync to it. We changed the name again. That lasted slightly over one day before we got corrupted again last night, at about 2049 local.
We are trying to decide what to do. Obviously WT0F is the guru for this, so I will be studying his responses for an answer. Any additional inputs will be appreciated.
--Tim K5RA
The file /meshchat (in the root directory of the Pi) has all the messages. Mine has over 3700 messages dating back to 10 Feb 2024. The first corruption happened on 17 Feb 2024 at 2125 local. The second message in the sync was from K7EOK, and the whole transfer was about 1200 messages, all with the same date/time.
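If anyone wants to look for a batch like that in their own file, counting rows per timestamp makes it stand out. This assumes the tab-separated layout with the epoch in column 2 that was discussed earlier in the thread:

    # Count how many messages share each timestamp; a corrupted batch shows up
    # as one timestamp with a very large count.
    awk -F'\t' '{print $2}' /meshchat | sort | uniq -c | sort -rn | head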
It has been a lot of fun ... not.
-Tim -K5RA
What file has the config parameter valid_future_message_time? This is the first I have heard of it. I have MeshChat 1.02 running on an R-Pi with the latest Bookworm, and Meshchat-api_2.10.0 running on a hAP ac lite with the latest nightly build for the last week or so. I think I checked all four config files in /usr/lib/cgi-bin and did not see it.
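For reference, this is the search I ran; the paths are just the ones that have come up in this thread, and it found nothing on mine:

    # Search the config/script locations mentioned in this thread for the setting.
    grep -rn 'valid_future_message_time' /usr/lib/cgi-bin /www/cgi-bin 2>/dev/null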
Thanks.
-Tim K5RA