Regular Expression Examples

Regular Expression Examples is a list, roughly sorted by complexity, of regular expression examples. It also serves as a library of useful expressions to include in your own code.

For advanced examples, see Advanced Regular Expression Examples. You can also find some regular expressions on the Regular Expressions and Bag of algorithms pages.

See Also

Example Regexes to Match Common Programming Language Constructs


Extracting numbers from text strings, removing unwanted characters, comp.lang.tcl, 2002-06-23
a delightful explication by Michael Cleverly
re_syntax
URI detector for arbitrary text as a regular expression
Arts and crafts of Tcl-Tk programming
Regular Expressions
Regular Expression Debugging Tips
Visual Regexp
A terrific way to learn about REs.


Redet
Another tool for learning about and working with REs.
Regular Expression Debugging Tips
More tools.

Simple regexp Examples

regexp has the following syntax:

regexp ?switches? exp string ?matchVar? ?subMatchVar subMatchVar ...?

If matchVar is specified, its value will be only the part of the string that was matched by the exp. As an example:
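For instance (a made-up illustration):

    regexp {[0-9]+} "abc 123 def" match
    # match is now 123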

If any subMatchVars are specified, their values will be the parts of the string that were matched by the parenthesized bits in the exp, counting open parentheses from left to right. For example:
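A small made-up illustration:

    regexp {([a-z]+) ([0-9]+)} "abc 123" whole letters digits
    # whole is "abc 123", letters is "abc", digits is "123"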

Many times, people only care about the subMatchVars and want to ignore matchVar. They use a 'dummy' variable as a placeholder in the command for the matchVar. You will often see things like
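A sketch of the idiom:

    regexp {([0-9]+)-([0-9]+)} 10-20 -> from to
    # from is 10, to is 20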

where ${->} holds the matched part. It is a sneaky but legal Tcl variable name.

PYK 2015-10-29: As a matter of fact, every string is a legal Tcl variable name.

Splitting a String Into Words

'How do I split an arbitrary string into words?' is a frequently asked question. If you use split $string { }, then multiple spaces will produce a list with empty elements. If you try to use foreach or lindex or some other list operation, then you must be sure that the string is a well-formed list. (Braces could cause problems.) So use a regular expression like this very simple shorthand for non-space characters:

You can even split a string of text with arbitrary spaces and special characters into a list of words by using the -inline and -all switches to regexp:
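A minimal sketch using \S, the shorthand for a non-space character:

    set string "  several   words,   oddly   spaced  "
    set words [regexp -all -inline {\S+} $string]
    # -> several words, oddly spaced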

Split into Words, Respecting Acronyms

from Tcl Chatroom, 2013-10-09
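The chatroom snippet isn't reproduced here; a sketch of one approach that keeps runs of capitals (acronyms) together:

    set s parseXMLFile
    puts [regexp -all -inline {[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+} $s]
    # -> parse XML File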

Floating Point Number

This expression includes options for leading +/- character, digits, decimal points, and a trailing exponent. Note the use of nearly duplicate expressions joined with the or operator | to permit the decimal point to lead or follow digits.
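A sketch of such an expression (not necessarily the one originally given here):

    set float_re {^[-+]?(\d+\.?\d*|\.\d+)([eE][-+]?\d+)?$}
    regexp -- $float_re -12.5e3   ;# 1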

An expression to find whether a string has any substring matching a floating point number (this was posted to comp.lang.tcl by Roland B. Roberts):

More information: http://www.regular-expressions.info/floatingpoint.html

Letters

Thanks to Brent Welch for these examples, showing the difference between a traditional character matching and 'the Unicode way.'

Only letters:
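Presumably something like:

    regexp {^[A-Za-z]+$} $string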

Only letters, the Unicode way:
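And using a Unicode-aware character class:

    regexp {^[[:alpha:]]+$} $string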

Special Characters

Thanks again to Brent Welch for these two examples.

The set of Tcl special characters: ] [ $ { } \

The set of regular expression special characters: ] [ $ ^ ? + * ( ) | \
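Sketches of the two bracket expressions being described (reconstructed to fit the explanation below):

    regexp {[][${}\\]} $string        ;# contains a Tcl special character?
    regexp {[][$^?+*()|\\]} $string   ;# contains an RE special character?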

Character   Description
*           The sub-pattern before '*' can occur zero or more times
+           The sub-pattern before '+' can occur one or more times
?           The sub-pattern before '?' can occur zero or one time
|           (Alternation) Matches any one of the sub-patterns separated by '|'; similar to a logical 'OR'
( )         Groups a pattern
[ ]         Defines a set of characters, or a range of characters such as [a-zA-Z0-9]

I don't understand these examples. Why have [, ], and then the rest of the characters inside a [] - that just makes the string have [ and ] there twice, right?

LV: the first regular expression should be seen like this:

{ ... }
Protect the 9 inner characters.
[ ... ]
Define a set of characters to process.
]
If your set of characters is going to include the right bracket character ] as a specific matching character, then it needs to be first in the set/class definition.
[${}
More individual characters.
\\
Doubled because when regexp goes to evaluate the characters, it would otherwise treat a single backslash as a request to quote the next character, the ending right bracket of the set/class.

The second regular expression is interpreted in a similar fashion. There are more characters because there are more metacharacters.

Also, not all characters are there - where are the period, equals, bang (exclamation sign), dash, colon, alphas that are a part of character entry escapes or classes, 0, hash/pound sign, and angle brackets (< and >)? These special characters all have meta meanings within regular expressions...

LV: Apparently no one has come along and updated the above expression to cover these.

Example posted by KC:

A set containing both angle brackets:
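For instance:

    regexp {[<>]} $string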

newline/carriage return

Could someone replace this line with some verbiage regarding the way one uses regular expressions for specific newline-carriage return handling (as opposed to the use of the $ metacharacter)?

Janos Holanyi: I would really need to build up a re that would match one line and only one line - that is, excluding carriage-return-newlines (\r\n) from matching... What would such a re look like?

LV: how about something like this?
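A sketch of such a test: the line matches only if it contains no carriage return or newline.

    regexp {^[^\r\n]*$} $line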

If you want to keep carriage returns or newlines by themselves, but not when they are together, you need something like:
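The expression credited below isn't reproduced; a sketch of one that behaves as described, accepting a lone \r or a lone \n but failing on the \r\n pair:

    regexp {^(?:[^\r\n]|\r(?!\n)|\n)*$} $string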

This allows plain carriage return or plain newline.

Thanks to bbh and Donal Fellows for this regular expression.

Back References

From comp.lang.tcl:

I did some experimenting with other strings, like 'just a HHHHEEEEAAAADDDDEEEERRRR'. The regular expression (.)\1\1\1 does the job I would have wanted, whereas (.){4} will return the last of each four characters - as posted as well.

That surprised me too -- being able to place backreferences within the regex is an extremely powerful technique.

Use (.)\1{3} for exactly 4 char repeats, and (.)\1+ for arbitrary repeats.
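For example:

    regexp -all -inline {(.)\1{3}} "just a HHHHEEEEAAAADDDDEEEERRRR"
    # -> HHHH H EEEE E AAAA A DDDD D EEEE E RRRR R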

Whitespace After a Newline

PYK 2019-02-21: How does one capture any whitespace followed by a newline, except for newlines? The key is to use a negative lookahead to match empty space not followed by a newline. That bears repeating: parentheses are used to isolate the negative lookahead so that what matches immediately prior is the empty string:
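A sketch of the technique: (?!\n) matches the empty string only where the next character is not a newline, so ((?!\n)\s)+ matches runs of whitespace that contain no newline.

    regexp {((?!\n)\s)+} "a \t \n  b" match
    # match is " \t ": the whitespace before, but not including, the newline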

This mechanism is effectively an and not operator.

bll 2019-02-21: I find:

much easier.

PYK 2019-02-21: That picks up much more than whitespace, so not quite the same thing.

IP Numbers

You can create a regular expression to check an IP address for correct syntax. Note that this regular expression only checks for groups of 1-3 digits separated by periods. If you want to ensure that the digit groups are from 0-255, or that you have a valid IP address, you'll have to do additional (non regexp) work. This code posted to comp.lang.tcl by George Peter Staplin
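A sketch of that kind of check (the original posting isn't reproduced): four groups of one to three digits separated by periods, not anchored:

    regexp {[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}} $ip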


The above regular expression matches any string where there are four groups of 1-3 digits separated by periods. Since it's not anchored to the start and end of the string (with ^ and $) it will match any string that contains four groups of 1-3 digits separated by periods, such as: '66.70.7.154.9'.

If you don't mind a longer regexp, there is no reason you can't ensure that each group of 1-3 digits is in the range of 0-255. For example (broken up a bit to make it more readable):
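A sketch of that longer form, using a helper variable for readability:

    set octet {(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)}
    regexp "^$octet\\.$octet\\.$octet\\.$octet\$" $ip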

recently on comp.lang.tcl, someone mentioned that http://www.oreilly.com/catalog/regex/chapter/ch04.html#Be_Specific talks about matching IP addresses.

Gururajesh: A Perfect regular expression to validate ip address with a single expression.

For 245.254.253.2, output is 245.254.253.2

For 265.254.243.2, the output is none, as an IP address can't have a number greater than 255.

Lars H: Perfect? No, it looks like it would accept 99a99b99c99, since . will match any character. Also, it can be shortened significantly by making use of {4} and the like (see Regular expressions).

Better is

Tcllib should be useful

freethomas: I think this regexp is much simpler and easier for IP numbers:
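The posted expression isn't reproduced above; judging from the replies that follow, it was roughly of this form (\D as the separator, the whole match captured, no anchoring):

    regexp {(\d{1,3}\D\d{1,3}\D\d{1,3}\D\d{1,3})} $ip match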

AMG: This expression allows any character to separate the octets, not just a period. I sincerely doubt this is what you want. Use \. instead of \D. Also it's not anchored with ^ and $, so it works on substrings rather than requiring that the whole string match. Though maybe this is what you want, since you explicitly capture the matching substring.

I already fixed the syntax issue of saying { at the beginning but leaving out the closing }, also of leaving out the first (.

I see no reason to use ( and ) grouping. You don't give variables into which the subexpressions would be captured, and it's pointless to capture the dots between the octets. (See what I did there?) Try this:
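Something along these lines:

    regexp {\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}} $ip match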

AMG: Here's a very similar script (to Lars H's contribution) that uses scan instead of regexp. It's much more readable, in my opinion.
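A sketch along the lines described (octet names a, b, c, d; the original code isn't reproduced):

    set string 245.254.253.2
    if {[scan $string %d.%d.%d.%d a b c d] == 4
            && 0 <= $a && $a <= 255 && 0 <= $b && $b <= 255
            && 0 <= $c && $c <= 255 && 0 <= $d && $d <= 255} {
        puts "$string is a valid IP address"
    } else {
        puts none
    }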

There are a few differences. One, the trailing dot is omitted from the first three output variables (which I call a, b, c, d instead of v1, v2, v3, v4). Two, leading zeroes are permitted and discarded. Three, -0 is accepted as 0. Four, garbage at the end of $string is silently discarded. Five, each octet can have a leading +, e.g. +255.+255.+255.+255. Six, it's OVER FIVE TIMES FASTER! On this machine, my version using scan takes 15 microseconds, whereas your version using regexp takes 78 microseconds. Use time to measure performance. (I replaced puts with return when testing.)

Now, here's a hybrid version that uses regexp.

This version takes 46 microseconds to execute. It doesn't accept leading + or -. It rejects garbage at the end of the string. It treats the octets as octal if they are given leading zeroes, and invalid octal is always accepted. The reason for this last is that the if command treats strings containing invalid octal as nonnumeric text, so the <= operator is used to sort text rather than compare numbers. Corrected version:

This version takes 47 microseconds and it rejects invalid octal. However, it still interprets numbers as octal if leading zeroes are given, so 0377.255.255.255 is accepted (but 0400.255.255.255 is rejected). To fix this, it would be necessary to make a pattern that rejects leading zeroes unless the octet is exactly zero, something like: (0|[1-9]\d*). But this is getting clumsy and slow; I prefer the scan solution. regexp: not always the right tool!

Gururajesh:

This will be ok... for above mentioned issue.

AMG: Why call scan four times? A single invocation can do the job:
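A sketch of that combination: the regexp rejects signs and trailing garbage, scan splits and converts, and expr checks the ranges.

    if {[regexp {^\d+\.\d+\.\d+\.\d+$} $string]
            && [scan $string %d.%d.%d.%d a b c d] == 4
            && $a <= 255 && $b <= 255 && $c <= 255 && $d <= 255} {
        puts "$string is a valid IP address"
    }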

I don't see any drawbacks to this approach. The regular expression is simple and is used only to reject + and - signs and garbage at the end, scan does the job of splitting and converting to integers, and math expressions check ranges. Three tools, each doing what they're designed for.

CJB: Here is a pure regexp version with comparable performance. It matches any valid ip, rejecting octals. However it does not split the integers and is therefore only useful for validation. The timings on my computer were about 22 microseconds for this version compared to 28 microseconds for the regexp/scan combo (I removed the puts statements for the comparison because they are slow and tend to vary).
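A sketch of a pure-regexp validator that also rejects leading zeroes (non-capturing groups, so it only validates):

    set octet {(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])}
    regexp "^$octet(?:\\.$octet){3}\$" $ip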

Note that the pure scan version is still fastest (about 20 microseconds), splits, and has the same rejections (%d stores integers and ignores extra leading 0 characters).

fh 2012-02-13 11:54:30:

To search for an IP address using a regular expression:

Domain names

(First shot)
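A rough first shot (an assumption, not the page's original expression): dot-separated labels of letters, digits, and hyphens, ending in an alphabetic top-level label.

    regexp {^([A-Za-z0-9][A-Za-z0-9-]*\.)+[A-Za-z]{2,}$} $domain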

This code does NOT attempt, obviously, to ensure that the last label of the domain matches a known top-level domain...

Regular Expression for parsing http string
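The original snippet isn't reproduced here (it apparently used AOLserver commands); a rough Tcl-only sketch of such a parse, with - allowed in the scheme:

    regexp {^([a-z][a-z0-9+.-]*)://([^/:?#]+)(?::(\d+))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?$} \
            $url -> scheme host port path query fragment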

the above author should remember this is a Tcl wiki, and not an aolserver one, but thanks for the submission ;)

PYK 2016-02-28: In the previous edit, a - character was added to the regular expression, prohibiting the occurrence of - in scheme component of a URL. As far as I can tell, - is allowed in the scheme component, so I've reverted that change in the expression above.

E-mail addresses

RS: No warranty, just a first shot:
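A rough stand-in for that first shot (the original expression isn't reproduced):

    regexp {^[A-Za-z0-9_.-]+@([A-Za-z0-9-]+\.)+[A-Za-z]{2,}$} $addr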

Understand that this expression is an attempt to see if a string has a format that is compatible with normal RFC SMTP email address formats. It does not attempt to see whether the email address is correct. Also, it does not account for comments embedded within email addresses, which are defined even though seldom used.

bll 2017-6-30 E-mail addresses are quite complicated. You must be careful not to reject valid e-mail addresses. For example, % and + characters are valid. Nobody uses the % sign any more as it is not secure. The + character is very useful, but unfortunately, there are a lot of incorrect e-mail validation routines that reject it.

The following pattern will still reject an e-mail address of the form user@[ip-address]. No lengths are checked. It does not check that the top-level domain (e.g. .org, .com, .solutions) is valid.
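A sketch of such a pattern (an approximation of what is described, not necessarily the posted one):

    regexp {^[A-Za-z0-9.!#$%&'*+/=?^_`{|}~-]+@[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?(\.[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?)+$} $addr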

Reference: https://en.wikipedia.org/wiki/Email_address#Examples

XML-like data

To match something similar to XML-tags you can use regular-expressions, too. Let's assume we have this text:
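The original sample isn't reproduced in the page; the following is an assumed stand-in (the tag name bo is taken from the next paragraph):

    set xml {<top><bo>some body text</bo><other>more</other></top>}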

We can match the body of bo with this regexp:
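A minimal sketch of such a match (non-greedy, so it stops at the first closing tag):

    regexp {<bo>(.*?)</bo>} $xml -> body
    # body is now "some body text"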

Now we extend our XML-text with some attributes for the tags, say:
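For example (again an assumed stand-in, not the page's original sample):

    set xml {<top><bo name="first">some body text</bo><bo name="second">more</bo></top>}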

If we try to match this with:
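Something along these lines (a hedged guess at the failing form):

    regexp {<bo(\s+[^>]*)?>(.*?)</bo>} $xml -> attributes body
    # body now runs all the way to the last </bo>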

it won't work anymore. This is because \s+ is greedy (in contrast to the non-greedy (.+?) and (.*?)), and that one greedy operator makes the whole expression greedy.

See Henry Spencer's reply in tcl 8.2 regexp not doing non-greedy matching correctly, comp.lang.tcl, 1999-09-20.

The correct way is:
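A sketch of one fix, which may not be the page's original correction: make the first quantifier non-greedy so the whole expression stays non-greedy.

    regexp {<bo(\s+?[^>]*?)?>(.*?)</bo>} $xml -> attributes body
    # attributes is { name="first"}, body is "some body text"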

Now we can write a more general XML-to-whatever translator like this:

  1. Substitute [ and ] with their escaped forms \[ and \] to avoid confusion with subst in step 3.
  2. Substitute the tags and attributes with commands
  3. Do a subst on the whole text, thereby calling the inserted commands
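A minimal sketch of such a translator; handleTag is a hypothetical callback, and the exact expression here is an assumption rather than the original:

    proc xml2text {xml} {
        # 1. protect any brackets already in the text so subst will not evaluate them
        set xml [string map {{[} {\[} {]} {\]}} $xml]
        # 2. replace each tag pair with a command call
        regsub -all {<(\w+?)(\s+[^\s>][^>]*?)?>(.*?)</\1>} $xml \
                {[handleTag {\1} {\2} {\3}]} xml
        # 3. evaluate the inserted commands
        return [subst -novariables $xml]
    }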

Call the parser with:
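For example, with a trivial hypothetical handler:

    proc handleTag {name attributes body} {
        return "<<$name>> $body"
    }
    set flat {<b>bold</b> and <i>italic</i> text}
    puts [xml2text $flat]
    # -> <<b>> bold and <<i>> italic text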

You have to be careful, though. Don't do this for large texts or texts with many nested xml-tags, because the regular-expression machine is not the right tool to parse large, nested files efficiently. (Stefan Vogel)

DKF: I agree with that last point. If you are really dealing with XML, it is better to use a proper tool like TclDOM or tDOM.

PYK 2015-10-30: I patched the regular expression to fix an issue where the attributes group could pick up part of the tag in documents containing tags with similar prefixes. The fix is to use whitespace followed by non-whitespace other than > to detect the beginning of attributes. There are other things that could still be improved.

Negated string

Bruce Hartweg wrote in comp.lang.tcl: You can't negate a regular expression, but you CAN negate a regular expression that is only a simple string. Logically, it's the following:

  • match any single char except first letter in the string.
  • match the first char in string if followed by any letter except the 2nd
  • match the first two if followed by any but the third, et cetera

Then the only thing more is to allow a partial match of the string at the end of the line. So, for a regexp that matches anything except the string 'Free':

The following proc will build the expression for any given string

Donal Fellows followed up with:

That's just set me thinking; you can do this by specifying that the whole string must consist of characters that are either not the first character of the antimatch*, or the first character of the antimatch so long as it is not followed by the rest of the antimatch. This leads to a fairly simply expressed pattern.

In fact, this allows us to strengthen what you say above to allow the matching of any negated regular expression directly so long as the first component of the antimatch is a literal, and the rest of the antimatch is expressible in an ERE lookahead constraint (which imposes a number of restrictions, but still allows for some fairly sophisticated patterns.)

* Anything's better than overloading 'string' here!

JMN 2005-12-22: Could someone please explain what is meant by a 'negated string' here? Specifically - what do the above achieve that isn't satisfied by the simpler:

Doesn't the following snippet from the regexp manpage indicate that a regexp can be negated? Where does (or did?) the 'simple string' requirement come in? Is this info no longer current?

Lars H: It indeed seems the entire problem is rather trivial. In Tcl 7 (before AREs) one sometimes had to do funny tricks like the ones Bruce Hartweg performs above, but his use of {0,2} means he must be assuming AREs. Perhaps there was a transitory period where one was available but not the other.

Oleg 2009-12-11: If one needs to match any string but 'foo', then the following will do the work:

And in general case when one needs to match any string that is neither 'foo' nor 'bar', then the following will do the work:

CRML 2013-11-06: In the general case, when one needs to match any string that is neither 'foo' nor 'bar', it might be done using:
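The expression isn't reproduced above; judging from AMG's reply it was essentially a negative lookahead such as:

    regexp {^(?!((foo|bar))$)} $string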

AMG: Oleg's regexps confuse me. Translated literally, I read them as 'match any string that does not begin with foo (or bar) unless that string has more characters after the foo (or bar).' Very indirect, I must say. CRML's suggestion I like better, though I would drop the extra parentheses to obtain: ^(?!(foo|bar)$). This says, 'match any string that does not begin with either foo or bar when immediately followed by end of string.' In other words, 'match any string that is not exactly foo or bar.'

Turn a string into %hex-escaped (url encoded) characters:
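A minimal sketch, safe only for simple alphanumeric input (see the caveats further down for why blindly substituting arbitrary text is dangerous):

    proc hexencode {str} {
        # wrap every character in a command that formats its code point as %XX
        regsub -all {(.)} $str {%[format %02X [scan {\1} %c]]} str
        return [subst -nobackslashes -novariables $str]
    }
    puts [hexencode Csan]   ;# %43%73%61%6E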

e.g. Csan -> %43%73%61%6E

This demonstrates the combination of regsub with subst, which is regarded as one of the most powerful ways to use regular expressions in Tcl.

Turn a string into %hex-escaped (url encoded) characters (part 2)

This one makes the result more readable and still quite safe to use in URLs, e.g. http://wiki.tcl.tk -> http%3A%2F%2Fwiki%2Etcl%2Etk
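A sketch that escapes only the non-alphanumeric characters (again, not safe for arbitrary input containing braces, brackets, or backslashes):

    proc urlencode {str} {
        regsub -all {([^A-Za-z0-9])} $str {%[format %02X [scan {\1} %c]]} str
        return [subst -nobackslashes -novariables $str]
    }
    puts [urlencode http://wiki.tcl.tk]   ;# http%3A%2F%2Fwiki%2Etcl%2Etk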

The inverse of the above (not optimized):
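A sketch of the inverse, turning each %XX sequence back into its character:

    proc urldecode {str} {
        regsub -all {%([0-9A-Fa-f]{2})} $str {[format %c 0x\1]} str
        return [subst -nobackslashes -novariables $str]
    }
    puts [urldecode http%3A%2F%2Fwiki%2Etcl%2Etk]   ;# http://wiki.tcl.tk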

Caveats about using regsub with subst

glennj 2008-12-16: It can be dangerous to blindly apply subst to the results of regsub, particularly if you have not validated the input string. Here's an example that's not too contrived:
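A minimal illustration of the problem (the transformation itself is a made-up stand-in):

    set string {[Some Malicious Command]}
    # wrap each word in a command substitution, intending subst to run only those
    regsub -all {\w+} $string {[string totitle &]} cmd
    puts [subst $cmd]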

This results in invalid command name 'Some'. What if $string was [exec format c:]?

See DKF's 'proc regsub-eval' contribution in regsub to properly prepare the input string for substitution. Paraphrased:
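Roughly, the idea is to escape the Tcl-special characters before substituting (a paraphrased sketch, not DKF's exact code):

    set protected [string map {\\ \\\\ [ \\[ ] \\] $ \\$} $string]
    regsub -all {\w+} $protected {[string totitle &]} cmd
    puts [subst $cmd]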

which results in what you'd expect: the string '[Some Malicious Command]'

APN: I don't follow why all the extra backslashes are needed in the string map. The following should work just as well?

PYK 2016-05-28: Indeed:

Maintain proper spacing when formatting for HTML

DG got this from Kevin Kenny on c.l.t.
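One common approach (probably not Kevin Kenny's exact code): replace each pair of spaces with a space plus &nbsp;, so runs of spaces survive HTML whitespace collapsing while single spaces still allow line wrapping.

    set text {a     b}
    regsub -all {  } $text { \&nbsp;} text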

And the output is:
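    a &nbsp; &nbsp; b

(the result of the sketch above)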

Tabs require replacement, too:
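One possibility, assuming a four-space rendering per tab:

    regsub -all {\t} $text {\&nbsp;\&nbsp;\&nbsp;\&nbsp;} text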

glennj: Taken from comp.lang.perl.misc, transform variable names into StudlyCapsNames:
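A sketch of one way to do it with regsub and subst (probably not the posted original); it uppercases the letter after each underscore and drops the underscore:

    set name some_variable_name
    regsub -all {(^|_)([a-z])} $name {[string toupper {\2}]} name
    puts [subst -nobackslashes -novariables $name]   ;# SomeVariableName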

When using ASED's syntax checker you get an error if you don't use the -- option to regexp. Instead of regexp {([^A-Za-z0-9_-])} $string you have to write regexp -- {([^A-Za-z0-9_-])} $string

LV: A user recently asked:

I have a string that I'm trying to parse. Why doesn't this seem to work?

It looks to me like the *? causes the subsequent \d+ to also be non-greedy and only match the first hit. Did I figure that out correctly? I presume that we currently don't have a way to turn off the greediness item?

Of course, in this simplified problem, one could just drop the greediness and code
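A hedged illustration of both points (the user's actual pattern isn't shown above):

    # the leading .*? makes the whole expression non-greedy, so \d+ grabs one digit
    regexp {^.*?(\d+)} "abc 12345" -> num   ;# num is 1
    # dropping the non-greedy quantifier gives the expected result
    regexp {^.*\s(\d+)} "abc 12345" -> num  ;# num is 12345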

I'll let the user decide if that suffices.

PYK 2019-08-15: See 'greediness' at Regular Expressions. In short the greediness of every quantifier is the greediness of its branch, regardless of the default preference of the quantifier. A branch, in turn, picks up its greediness from the first quantifier.

How do you select from two words?

LES: You got the regexp syntax wrong and tried to match the regular expression with the string 'match'. There is no 'zzz' variable (the actual match variable in your code) because your regular expression does not match the string 'match'. Try this:
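Something along these lines (the actual words from the question aren't shown, so 'quick' and 'brown' stand in for them):

    set string "the quick brown fox"
    regexp {(quick|brown)} $string match zzz
    # match and zzz both hold "quick"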

Note that I could have dropped the 'zzz' variable, but left it there as a second match variable, as an exercise to you. You should understand why and what it does if you read the regexp page and assimilate the syntax.

Infinite spaces at start and end

RUJ: How would you match the following pattern in a string: an arbitrary (unbounded) number of spaces at the start and at the end?

LV: try
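A sketch of the sort of pattern meant, assuming literal spaces rather than general whitespace:

    puts [regexp {^ +(.*) +$} {   some text   }]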

which should have a value of 1 (in other words, it matched). Of course, if those leading and trailing spaces are optional, then change the + to a *.

CRML: Non-greedy and greedy do not give the same result. In the previous example, the .* matches all the string up to the last-but-one character.

URL Parser

See URL Parser.

Match a 'quoted string'

AMG: Adapted from Wibble:
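A sketch of such a pattern (not necessarily the exact expression used in Wibble):

    set re {"(?:[^"\\]|\\.)*"}
    regexp $re {say "hello \"world\"" now} match
    # match is "hello \"world\"" including the surrounding quotes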

This recognizes strings starting and ending with double quote characters. Any character can be embedded in the string, even double quotes, when preceded by an odd number of backslashes.

Word Splitting, Respecting Quoted Strings

given some text, e.g.

how to parse it into

see KBK, #tcl irc channel, 2012-12-02
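KBK's snippet isn't reproduced here; a minimal sketch of one approach:

    set text {one two "three four" five}
    set words [regexp -inline -all {"[^"]*"|\S+} $text]
    # -> one two {"three four"} five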

split a string into n-length substrings

evilotto, #tcl, 2013-02-07
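A sketch (not evilotto's original), splitting into chunks of at most three characters:

    set parts [regexp -inline -all {.{1,3}} abcdefgh]
    # -> abc def gh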

At Least 1 Alpha Character Interspersed with 0 or More Digits
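A sketch of one such pattern: only letters and digits, with at least one letter somewhere:

    regexp {^\d*[A-Za-z][A-Za-z\d]*$} $string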

Matching a group of strings

We can match a group of strings or subjects in a single regular expression
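For example (a hedged sketch with made-up subjects):

    set subjects {cat dog bird}
    set re "^(?:[join $subjects |])\$"
    regexp $re dog    ;# 1
    regexp $re fish   ;# 0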

Sqlite Numeric Literal
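A sketch of a pattern for SQLite-style numeric literals (decimal integers or reals with an optional exponent, or 0x hexadecimal integers); this is an assumption about what the section intended:

    set re {^(?:0[xX][0-9A-Fa-f]+|(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][-+]?\d+)?)$}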

ak - 2017-08-08 03:32:33

Regarding negation of regular expressions.

While the regular expression syntax does not allow for simple negation, the underlying formalism of (non)deterministic finite automata does. Simply swap final and non-final states to negate, i.e. complement, it.

See for example the grammar::fa package in Tcllib, which provides a complement method. It is implemented in the operations package. As are methods to convert from and to regular expressions.

Category Tutorial
Category String Processing
