Log file timestamp not updated on some file systems. Support periodic sync to flush metadata to FS #3593
Comments
@roytmana, I am reluctant to push a fix for an OpenJDK bug that will probably be addressed by either OpenJDK or Azure in the future. Would it be possible for you to repeat the following in a background thread?
Hello @vy, sorry for the delay. I had that hack working before I submitted the ticket, but to be absolutely sure I did not mislead you, I wanted Ops to deploy my test to Azure in exactly the way our prod apps are deployed. I just got confirmation from them that it did solve the issue. I did not do it in a background thread; instead, I copied the rolling file appender and overrode its append() method, where I used reflection to get the output stream and sync it on every write. So it would be great if it is feasible to add something like that. We have a support ticket opened with Microsoft on the issue, but it does not seem to be getting anywhere. I can see if I can get the link to post here, but I am not sure if it is publicly available. Thanks again for your prompt response!
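The hack described above can be sketched without Log4j internals. A minimal, hypothetical illustration of the idea (class and file names are made up, and a real appender would keep its buffering): call FileOutputStream.getFD().sync() after every write so the filesystem is forced to update the file's data and metadata.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the per-write sync hack: after each write,
// flush the stream and fsync the descriptor so data AND metadata
// (size, last-modified timestamp) reach the filesystem.
public class SyncingWriter {
    private final FileOutputStream out;

    public SyncingWriter(File file) throws IOException {
        this.out = new FileOutputStream(file, true); // append mode
    }

    public void write(String line) throws IOException {
        out.write((line + System.lineSeparator()).getBytes(StandardCharsets.UTF_8));
        out.flush();          // push buffered bytes to the OS
        out.getFD().sync();   // fsync: force the OS to persist data and metadata
    }

    public void close() throws IOException {
        out.close();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("sync-demo", ".log");
        f.deleteOnExit();
        SyncingWriter w = new SyncingWriter(f);
        w.write("hello");
        w.close();
        System.out.println(f.length() > 0);
    }
}
```

Syncing on every write is the most aggressive variant; the periodic-sync enhancement requested in this issue would trade some durability for much lower overhead.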
@vy
@vy I can test it as well (not with log4j but with a plain text file)
@roytmana, double-checking: does immediateFlush not help?
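For context, immediateFlush is an appender attribute in Log4j 2 configuration; a minimal log4j2.xml fragment (file paths and the appender name are placeholders) would look like:

```xml
<!-- Hypothetical fragment; paths and names are placeholders. -->
<RollingFile name="File" fileName="logs/app.log"
             filePattern="logs/app-%d{yyyy-MM-dd}.log.gz"
             immediateFlush="true">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <Policies>
    <TimeBasedTriggeringPolicy/>
  </Policies>
</RollingFile>
```

Note that flush() only pushes buffered bytes to the operating system; unlike FileDescriptor.sync(), it does not force the filesystem to persist metadata, which is consistent with the behavior reported in this issue.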
I disagree; if the performance penalty were negligible, that option would be on by default. Microsoft will probably solve the problem sooner or later, so I see no reason to modify the behavior for all Log4j Core users. However, we can:
That would be really strange:
I just confirmed that I tried SYNC and DSYNC on a text file (note that they require a bunch of other options to be supplied, like CREATE, APPEND, etc.). It does not appear to fix the issue, but I may need to test it a bit more (new file vs. appending to an existing one, etc.) if we want to give that a try.
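For reference, this is roughly how such a test might be set up (a sketch, not the exact program used above): opening a file with java.nio's DSYNC option, which, as noted, must be combined with CREATE/WRITE/APPEND explicitly.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: open a file so that every write is synchronously flushed to
// the underlying storage device.
public class DsyncOpenDemo {
    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("dsync-demo", ".log");
        log.toFile().deleteOnExit();
        try (OutputStream out = Files.newOutputStream(log,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.APPEND,
                StandardOpenOption.DSYNC)) { // DSYNC = data; SYNC = data + metadata
            out.write("one line\n".getBytes());
        }
        System.out.println(Files.size(log));
    }
}
```

DSYNC only guarantees the file *content* is written synchronously; SYNC additionally covers metadata, which may matter for the timestamp problem described in this issue.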
Do you have similar issues in Log4cxx and Log4net on Azure?
we are a java shop :-) |
Trying to figure out if this is Java specific. |
We are Java, and we run on Linux (Red Hat 8) containers in Azure Kubernetes, so unfortunately I have no way to test that. If you could give me an executable, I can see if we can run it from a Windows machine against an Azure file share, but it will likely be apples to oranges, as it is the OS/networking stack, or maybe the SMB file shares, that is at fault. I appreciate very much your sticking with it, because it is really not a Log4j issue; I was just hoping that an enhancement for periodic fsync would be helpful enough in general, and non-intrusive enough, to make it into the codebase :-)
I feel like I might have seen something similar a long time ago, but I can't recall what it was. I definitely haven't heard of anything like this happening recently, though. It sounds like an OS or JVM issue, as metadata such as the last-updated timestamp should be handled by the filesystem. Interestingly, it seems that NTFS may not update the last-access timestamp immediately:
That's probably not relevant here, unless the filesystem where the logs are being written is somehow NTFS.
@rm5248, @ppkarwasz: on NTFS I often see that the file size is not updated for log files. Our few Azure systems use Azure analytics instead of log4net.
In our case the timestamp is not updated at all, and after 60 days, when an archival process (external to the app) looks for old logs, it hits live, open log files and deletes them, completely breaking logging: Log4j does not recover and continues to "write" to the deleted file, writing nothing and never re-creating the file. I do not know what actual file system is behind those Azure file shares; they are SMB shares mounted in Red Hat ubi8-based pods on Azure AKS.
Hello, we are experiencing an issue on Azure file shares where log file metadata (particularly the last-updated timestamp) is not updated until the log file is closed. We confirmed that it is not a Log4j issue but a problem with any FileOutputStream on this file system.
Would it be possible to add functionality to the file manager implementations whereby the FileOutputStream's file descriptor would be fsynced at a frequency configured on the Appender?
FileOutputStream.getFD().sync();
Or perhaps a more generalized solution
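The requested behavior can be sketched as a small background task that periodically fsyncs the appender's open stream. This is a hypothetical illustration (class and thread names are made up), not Log4j's actual file manager API:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: periodically call FileOutputStream.getFD().sync()
// on an open log stream so the filesystem's metadata (size, timestamp)
// is refreshed even while the file stays open.
public class PeriodicFsync implements AutoCloseable {
    private final FileOutputStream out;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "log-fsync"); // name is illustrative
                t.setDaemon(true); // do not keep the JVM alive for syncing
                return t;
            });

    public PeriodicFsync(FileOutputStream out, long periodSeconds) {
        this.out = out;
        scheduler.scheduleAtFixedRate(this::syncQuietly,
                periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    private void syncQuietly() {
        try {
            out.getFD().sync(); // flush data and metadata to the filesystem
        } catch (IOException ignored) {
            // the stream may have been closed or rolled over in the meantime
        }
    }

    @Override
    public void close() {
        scheduler.shutdownNow();
    }
}
```

Because the sync runs off the logging path, the per-event cost stays unchanged; only the configured period bounds how stale the file's metadata can get.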