This allows a simple form of netmask-style patterning, which supports
whitelisting /8, /16 and /24 subnets for IPv4, and subnets at any
multiple of /32 for IPv6.
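Since /8, /16 and /24 fall on dotted-quad text boundaries, a whitelist entry can be stored as a textual prefix and tested with a plain string compare. A minimal sketch of the idea; `whitelist_match` is a hypothetical helper, not the actual implementation:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: a /24 entry is stored as "192.168.1." and a /8
 * as "10.", so a prefix compare suffices. The trailing separator in
 * the entry prevents "10." from matching "100.x.x.x". IPv6 prefixes
 * at multiples of /32 align with colon-separated groups the same way. */
static bool whitelist_match(const char *entry, const char *ip)
{
	return strncmp(entry, ip, strlen(entry)) == 0;
}
```

Because the entry always ends in a separator character, no extra boundary check is needed.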
Automatically `block` certain clients based on the severity of the
error messages they produce. These clients are certainly doing
something bad, and we don't want to let them try again before we drop
their packets.
The block is issued immediately, but it only lasts a short time. Most
likely, additional messages will come in afterwards that cause a
longer ban anyway.
This also forces overwriting of ipset entries without warning, which
helps to keep the ipset list in sync without further statekeeping.
The pattern list has been expanded with an `instant_block` integer
value, which indicates for how many seconds the IP should be dropped
if the pattern matches.
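The expanded pattern table could look roughly like the sketch below; the struct layout, field names, and the example patterns and values are all hypothetical, for illustration only:

```c
#include <stddef.h>

/* Hypothetical sketch of the expanded pattern table: `instant_block`
 * is the number of seconds to drop the IP for when the pattern
 * matches; 0 means no instant block, only the weight is scored. */
struct pattern {
	const char *pattern;	/* substring/regex to match in the log line */
	float weight;		/* added to the IP's score total */
	int instant_block;	/* seconds to block immediately, 0 = none */
};

static const struct pattern patterns[] = {
	{ "Invalid user ",        0.4f,  0 },
	{ "Bad protocol version", 1.0f, 60 },
	{ NULL, 0.0f, 0 }	/* sentinel */
};
```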
Compiling with -DDEBUG=1 now creates an extra-verbose build that can
be used to debug the pattern matching in more detail. As a result,
the non-debug build is less verbose.
Send a USR1 signal to the process to make it dump the current
state table.
Some of these come with a higher weight, as they're very obvious
points of abuse/probing, like attempting to use old protocols or not
being able to use modern key types.
We do not want to rely solely on one pattern for detecting login
attempts. This change creates a simple static list of patterns, each
with a weight. If a pattern matches, its weight is added to the IP's
score total. If the score total exceeds the maximum, the IP is blocked.
Previously we blocked at count=3; now we block when the score reaches
1.0. The weight for the standard invalid-user login is dropped to 0.4,
which has the same effect: three matches push the total past 1.0.
The `threshold` parameter is therefore obsolete, and if it is found in
the config file, it is ignored.
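The scoring described above can be sketched as follows; the function name, the second pattern, and its weight are assumptions made up for the example:

```c
#include <string.h>
#include <stdbool.h>

/* Hypothetical sketch of weighted scoring: each matching pattern adds
 * its weight to the host's running score, and the host is blocked once
 * the score reaches 1.0. With the invalid-user weight at 0.4, the
 * third hit brings the total to 1.2 and triggers the block, matching
 * the old count=3 behaviour. */
struct score_pattern { const char *needle; float weight; };

static const struct score_pattern score_patterns[] = {
	{ "Invalid user ",                         0.4f },
	{ "Did not receive identification string", 0.6f },
};

static bool score_line(const char *line, float *score)
{
	for (size_t i = 0;
	     i < sizeof(score_patterns) / sizeof(score_patterns[0]); i++)
		if (strstr(line, score_patterns[i].needle))
			*score += score_patterns[i].weight;
	return *score >= 1.0f;	/* true: block this host */
}
```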
This is a much more reliable method of extracting the IP address
from log entries, and allows us to consolidate two matches into
a single operation.
Once matched, we extract the IP substring and pass it to `find()`
as usual. More regexes can be added later if that proves useful.
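One way to do this with POSIX regexes is sketched below; the function name and the (deliberately loose) IPv4 pattern are illustrative assumptions, not the actual code:

```c
#include <regex.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical sketch: one regex with a capture group pulls the
 * address out of a log line in a single pass; the captured substring
 * is then handed to the lookup (`find()` in the text above). */
static bool extract_ip(const char *line, char *out, size_t outlen)
{
	regex_t re;
	regmatch_t m[2];
	bool ok = false;

	if (regcomp(&re, "from ([0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+)",
		    REG_EXTENDED) != 0)
		return false;
	if (regexec(&re, line, 2, m, 0) == 0) {
		size_t len = m[1].rm_eo - m[1].rm_so;
		if (len < outlen) {
			memcpy(out, line + m[1].rm_so, len);
			out[len] = '\0';
			ok = true;
		}
	}
	regfree(&re);
	return ok;
}
```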
This reverts commit dc8f37e41f.
This message can be printed when a normal, legitimate user
disconnects, and would therefore be a false positive. We should
never get anywhere close to blocking legitimate users, ever.
We can't delete an entry only when it is blocked; that would leave
all other entries lingering in the list until they hit the limit,
likely consuming lots of memory.
Instead, we prune based solely on timestamp values. This regularly
removes old entries automatically, while leaving recent hits that
haven't reached the expiry time. Blocked IPs are not removed
immediately either, but the expiry time will remove them as well.
This ensures that hosts that retry at large intervals actually get
blocked again right away.
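Timestamp-only pruning of a singly linked list can be sketched like this; the struct and function are hypothetical stand-ins for the real state table:

```c
#include <stdlib.h>
#include <time.h>

/* Hypothetical sketch: walk the list and unlink every entry older
 * than the expiry window, regardless of whether it was blocked.
 * Blocked hosts therefore age out too, so a host retrying after a
 * long interval is scored (and blocked) fresh. */
struct entry {
	time_t last_seen;
	struct entry *next;
};

static struct entry *prune(struct entry *head, time_t now, time_t expiry)
{
	struct entry **pp = &head;

	while (*pp) {
		if (now - (*pp)->last_seen > expiry) {
			struct entry *dead = *pp;
			*pp = dead->next;	/* unlink and free old entry */
			free(dead);
		} else {
			pp = &(*pp)->next;
		}
	}
	return head;
}
```

Using a pointer-to-pointer walk avoids special-casing removal of the head entry.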
Create `tallow` and `tallow6` ipsets, hook them up to iptables,
and create a single rule in the INPUT chain of the filter
table.
The ipsets are created with `expire` timeouts by default, which
removes the need for pruning, so we can erase entries from our
linked list immediately when blocking.
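A sketch of building such a set-creation command; whether the real code runs it via `system()` or a pipe to `ipset restore` is not specified here, and the helper, set parameters, and timeout value are assumptions for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: build an ipset command that creates a set
 * with a default timeout, so the kernel expires entries on its own;
 * `-exist` makes the create idempotent across restarts. */
static int make_create_cmd(char *buf, size_t len, const char *name,
			   const char *family, int timeout)
{
	return snprintf(buf, len,
			"ipset create %s hash:ip family %s timeout %d -exist",
			name, family, timeout);
}
```

With per-entry kernel timeouts, an `ipset add` carries its own expiry and the daemon never has to revisit the entry.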
This is a far more robust way of tailing the journal, and it works
on two different journal versions. It is a bit more involved, and
journal slowness may cause it to take several seconds to iterate
through the journal after a rotate or after startup, but it is far
more reliable than the old method.
All output has also been pushed to stderr, which makes the blocked/
unblocked messages end up in the journal itself.
Adds a signal handler to gracefully shut down on an exit signal,
which doubles as a way to quickly dump the current state table.
A journal tailing error workaround thanks to ssh-blocker.