Conversion Examples

Infix         | Postfix
--------------+-----------
4 + 2 + 7     | 4 2 + 7 +
4 + 2 - 7     | 4 2 + 7 -
4 - 2 + 7     | 4 2 - 7 +
4 + 2 / 7     | 4 2 7 / +
4 / 2 + 7     | 4 2 / 7 +
4 * 2 / 7     | 4 2 * 7 /
4 / 2 * 7     | 4 2 / 7 *
4 + 2 * 7     | 4 2 7 * +
4 * 2 + 7     | 4 2 * 7 +
( 4 + 2 ) * 7 | 4 2 + 7 *
4 + ( 2 * 7 ) | 4 2 7 * +
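The conversions above can be produced with the shunting-yard algorithm: operands go straight to the output, while operators wait on a stack until an operator of equal or higher precedence forces them out. A minimal sketch (the names and structure here are illustrative, not the course's converter, and only +, -, *, / and parentheses are handled):

```python
# Minimal shunting-yard sketch: space-separated infix -> postfix string.
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def to_postfix(infix: str) -> str:
    out, ops = [], []
    for tok in infix.split():
        if tok.isdigit():
            out.append(tok)                 # operands go straight to output
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':           # pop until the matching '('
                out.append(ops.pop())
            ops.pop()                       # discard the '('
        else:
            # pop operators of equal or higher precedence (left-associative)
            while ops and ops[-1] != '(' and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
    while ops:                              # flush remaining operators
        out.append(ops.pop())
    return ' '.join(out)

print(to_postfix('4 + 2 * 7'))      # 4 2 7 * +
print(to_postfix('( 4 + 2 ) * 7'))  # 4 2 + 7 *
```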
Operator Precedence

Highest precedence at the top, lowest at the bottom.
Operators of the same precedence are processed left to right,
except **, which Python evaluates right to left.
Operator    | Description
------------+------------------------------
( )         | parentheses (sub-expression)
**          | power
+, -        | unary operators [1]
*, /, //, % | multiplication, division,
            | floor division, modulus
+, -        | addition, subtraction

[1] A unary operator is one that takes a single operand/argument and
performs an operation on it.
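The associativity rules above are easy to check interactively; a quick sketch:

```python
# Same-precedence operators evaluate left to right ...
print(8 - 3 + 2)    # (8 - 3) + 2 -> 7

# ... but Python's ** is right-associative:
print(2 ** 3 ** 2)  # 2 ** (3 ** 2) -> 512, not (2 ** 3) ** 2 == 64
```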
Modulus/Remainder Operator
13 % 2 | 1 |
-13 % 2 | 1 |
13 % -2 | -1 |
-13 % -2 | -1 |
13.4 % 2 | 1.4000000000000004 |
13.4 % 2.1 | 0.7999999999999998 |
-13.4 % 2.1 | 1.3000000000000003 |
-13.4 % -2.1 | -0.7999999999999998 |
Note:
In Python, a % b is defined so that a == b * (a // b) + (a % b), and
the result takes the sign of the divisor b. That is why -13 % 2 is 1
while 13 % -2 is -1, and why the operator also works for floats.
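The identity behind the table can be verified directly:

```python
# Python's % satisfies a == b * (a // b) + (a % b) for any nonzero b,
# and the result has the same sign as the divisor b.
for a, b in [(13, 2), (-13, 2), (13, -2), (-13, -2)]:
    assert a == b * (a // b) + (a % b)

print(-13 % 2)  # 1   (divisor positive -> nonnegative result)
print(13 % -2)  # -1  (divisor negative -> nonpositive result)
```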
Tokenizer
As an exercise for the student, modify the code to recognize:
- .123
- -.123
- -.123E6
- -.123E-6
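One possible pattern for this exercise (an assumption on my part, not the course's answer key): allow an optional sign, either digits with an optional fraction or a leading-dot fraction, then an optional exponent.

```python
import re

# Optional sign; digits with optional fraction OR leading-dot fraction;
# optional exponent with its own optional sign.
NUMBER = re.compile(r'[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?')

for s in ['.123', '-.123', '-.123E6', '-.123E-6', '42', '13.4']:
    print(s, '->', NUMBER.match(s).group())
```

This pattern would replace the 'number' entry in REGXLIST below; the search-order rule still applies, since it is longer and more specific than the single-character operator patterns.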
#!/usr/bin/python3
# ====================================================================
# tokenize a numerical expression string
# return a list of tokens (strings)
#
# Notes: this code does not check for expression syntax errors.
#        that should be done elsewhere.
# ====================================================================

import re

TOKFLAG = False

# --------------------------------------------------------------------
# ---- compile regx
# ----
# ---- regx to find tokens:
# ----    99.99, 99, **, //, (, ), *, /, +, -, %, unary-, unary+
# ----
# ---- search order is important
# ---- (long token patterns followed by shorter patterns)
# --------------------------------------------------------------------

# ---- format: regx, name, token type/precedence
REGXLIST = [
    (r'[+-]?\d+(\.\d*)?', 'number',    0),
    (r'\*\*',             'power',     3),
    (r'\/\/',             'floor',     4),
    (r'\(',               'sub-start', 1),
    (r'\)',               'sub-end',   2),
    (r'\*',               'multiply',  4),
    (r'\/',               'divide',    4),
    (r'%',                'remainder', 4),
    (r'\+',               'add',       5),
    (r'-',                'subtract',  5)
]

REGXPATTERNS = []
for pat in REGXLIST:
    REGXPATTERNS.append((re.compile(pat[0]), pat[1], pat[2]))

# --------------------------------------------------------------------
# ---- return the token at the start of the string
# --------------------------------------------------------------------

def _get_token(string):
    for pat in REGXPATTERNS:
        res = re.match(pat[0], string)
        if res is not None:
            return (True, res, pat)
    print('no regx match found')
    return (False, None, None)

# --------------------------------------------------------------------
# ---- convert an infix expression string to a list of tokens
# --------------------------------------------------------------------

def convert_to_tokens(infix_string):
    tokens = []

    # ---- tokenize the string
    while len(infix_string) > 0:

        tf, res, pat = _get_token(infix_string)

        # ---- found token match?
        if not tf:
            return (False, tokens)

        # ---- add the token to the list of tokens
        # ---- include precedence
        token = infix_string[:res.end()]
        if TOKFLAG:
            print(f'tokenizer found: {(token, pat[2])}')
        tokens.append((token, pat[2]))

        # ---- skip the stuff we have already processed
        # ---- and any leading/trailing blanks
        infix_string = infix_string[res.end():].strip()

    # ---- return the tokens we have found
    return (True, tokens)

# --------------------------------------------------------------------
# ---- display list of tokens
# --------------------------------------------------------------------

def display_token_list(tokens: list, title: str = None, indexed: bool = False):
    if title is not None:
        print(title)
    if indexed:
        for i, tok in enumerate(tokens):
            print(f'[{i:2}] {tok[0]:<9} {tok[1]}')
    else:
        print(' '.join([tok[0] for tok in tokens]))
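The match-and-consume loop above is the heart of the tokenizer. A stripped-down, self-contained version (pattern list abbreviated; the names here are illustrative, not the module's API) shows the mechanism on one expression:

```python
import re

# Abbreviated pattern list: (compiled regx, precedence).
PATTERNS = [
    (re.compile(r'\d+(\.\d*)?'), 0),  # number
    (re.compile(r'\*'), 4),           # multiply
    (re.compile(r'\+'), 5),           # add
]

def tokenize(s):
    tokens = []
    while s:
        for rx, prec in PATTERNS:
            m = rx.match(s)
            if m:
                tokens.append((m.group(), prec))
                s = s[m.end():].strip()   # consume token and blanks
                break
        else:
            return None                   # no pattern matched: bad input
    return tokens

print(tokenize('4 + 2 * 7'))
# [('4', 0), ('+', 5), ('2', 0), ('*', 4), ('7', 0)]
```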
Links
Convert Infix expression to Postfix expression
Online Converter - Infix to Postfix or Prefix
Using stacks: parsing of arithmetic expressions (pdf)
Miscellaneous - Interesting But Not Required
Backus–Naur form (BNF)
Integer State Diagram
Float State Diagram
Python Tokenize Module