Oddly enough, I do ZERO auto-classification. I’m kind of obsessive about making sure everything is categorized right. But your project sounds super interesting; perhaps with enough AI, I could get rid of my obsession!
You can try my solution - I hope my documentation is clear enough for non-technical users.
Thanks for this!
I use Google Colab to run Python. Here’s a modified version (thanks ChatGPT!) that will prompt you to upload the file and spit out the revised file for you to download.
# Amazon Order History fixer for Google Colab
# Upload the new Amazon CSV export, convert it back to the old format expected by Tiller,
# then download the fixed CSV.
import csv
from pathlib import Path

from google.colab import files

# =========================
# CONFIG
# =========================
# Column renames: new name -> old name
RENAMES = {
    "Total Amount": "Total Owed",
    "Original Quantity": "Quantity",
    "Payment Method Type": "Payment Instrument Type",
    "Gift Recipient Contact": "Gift Recipient Contact Details",
}

# Old column order (what Tiller expects)
OLD_ORDER = [
    "Website",
    "Order ID",
    "Order Date",
    "Purchase Order Number",
    "Currency",
    "Unit Price",
    "Unit Price Tax",
    "Shipping Charge",
    "Total Discounts",
    "Total Owed",
    "Shipment Item Subtotal",
    "Shipment Item Subtotal Tax",
    "ASIN",
    "Product Condition",
    "Quantity",
    "Payment Instrument Type",
    "Order Status",
    "Shipment Status",
    "Ship Date",
    "Shipping Option",
    "Shipping Address",
    "Billing Address",
    "Carrier Name & Tracking Number",
    "Product Name",
    "Gift Message",
    "Gift Sender Name",
    "Gift Recipient Contact Details",
]

# =========================
# UPLOAD FILE
# =========================
print("Upload your Amazon Order History CSV file...")
uploaded = files.upload()
if not uploaded:
    raise ValueError("No file was uploaded.")

input_filename = list(uploaded.keys())[0]
input_path = Path(input_filename)
if input_path.suffix.lower() != ".csv":
    raise ValueError(f"Expected a CSV file, but got: {input_filename}")

output_filename = input_path.stem + " (fixed).csv"
output_path = Path(output_filename)

# =========================
# READ INPUT
# =========================
with open(input_path, newline="", encoding="utf-8-sig") as f:
    reader = csv.DictReader(f)
    rows = list(reader)
    input_columns = reader.fieldnames or []

print(f"Read {len(rows)} rows from {input_filename}")
print("\nInput columns found:")
for col in input_columns:
    print(f"- {col}")

# =========================
# APPLY RENAMES
# =========================
for row in rows:
    for new_name, old_name in RENAMES.items():
        if new_name in row:
            row[old_name] = row.pop(new_name)

# =========================
# ENSURE ALL OLD COLUMNS EXIST
# =========================
for row in rows:
    for col in OLD_ORDER:
        if col not in row:
            row[col] = ""

# =========================
# SORT BY ORDER DATE DESCENDING
# =========================
rows.sort(key=lambda r: r.get("Order Date", ""), reverse=True)

# =========================
# WRITE OUTPUT
# =========================
with open(output_path, "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=OLD_ORDER,
        quoting=csv.QUOTE_ALL,
        extrasaction="ignore",
    )
    writer.writeheader()
    writer.writerows(rows)

print(f"\nWrote {len(rows)} rows to {output_filename}")

# =========================
# DOWNLOAD OUTPUT
# =========================
files.download(str(output_path))
Hello Dave -
Do you have a recommendation for changing the script so that there are no static values? Instead, it would import without those values, and I could then pick them from a drop-down, as different credit cards are set up in our account for use depending on the purchaser.
const TILLER_CONFIG = {
  SHEET_NAME: "Transactions",
  COLUMNS: {
    DATE: "Date",
    DESCRIPTION: "Description",
    AMOUNT: "Amount",
    TRANSACTION_ID: "Transaction ID",
    FULL_DESCRIPTION: "Full Description",
    DATE_ADDED: "Date Added",
    MONTH: "Month",
    WEEK: "Week",
    ACCOUNT: "Account",
    ACCOUNT_NUMBER: "Account #",
    INSTITUTION: "Institution",
    ACCOUNT_ID: "Account ID",
    METADATA: "Metadata"
  },
  STATIC_VALUES: {
    ACCOUNT: "Chase Amazon Rewards Card",
    ACCOUNT_NUMBER: "xxxx1234",
    INSTITUTION: "Chase",
    ACCOUNT_ID: "5ded4c23c22997003f4a4572"
  }
};
I tried to just take out those lines, but could never get the script to execute to completion.
Any help would be appreciated. No biggie though. Thanks for your work on this.
David
Sure, just change the values to "" (that is, two standard double quotes with nothing between them).
How difficult would it be to alter the script to bring in the Payment Method Type to a new column created at the far right of the Transactions spreadsheet? Any guidance as to where in the script I might undertake such a task? It’s like I can follow along to some extent, then become concerned I’ll break it or I’ll be tinkering in the wrong place altogether.
I thought that might be your next question, given your last one. What about this: I update the script to import the payment type into the Account column, and then you replace it with the correct value? That avoids the problem of adding a permanent new column that is used only for Amazon transactions.
Another idea is to do what the original script did, which was to take the extra data from the Amazon transaction and stick it all in the existing Metadata column. As I think about it, this probably makes more sense, because there might be other information in that file that people want.
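To sketch the Metadata idea (the field names are taken from the CSV columns discussed earlier in this thread, but this is an illustration, not the actual script):

```python
import json

# Illustrative selection of extra Amazon CSV fields worth preserving.
EXTRA_FIELDS = ["Payment Instrument Type", "Quantity", "Unit Price", "Unit Price Tax"]

def build_metadata(row):
    """Pack the selected fields from one Amazon CSV row into a JSON
    string suitable for a single Metadata cell; empty values are dropped."""
    return json.dumps({k: row[k] for k in EXTRA_FIELDS if row.get(k)})

row = {
    "Order ID": "111-0000000-0000000",
    "Payment Instrument Type": "Visa",
    "Quantity": "2",
    "Unit Price": "9.99",
    "Unit Price Tax": "",  # empty, so it is omitted from the metadata
}
print(build_metadata(row))
```

A JSON string keeps everything in one cell while staying easy to parse later if you ever want the details back out.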
I am traveling right now, so any updates will come toward the end of next week. How does that sound?
@daveahlers Thank you so much for working on this!
One ask: Would it be possible to do a day selector for the lookback instead of months? A finer grain would help avoid re-importing items that are already processed.
Here’s why I didn’t do it that way:
- I’m suspicious of Amazon inserting a recently shipped transaction under its original order date, creating a gap that a short lookback would miss
- I also worry about the timing of the exported data: if I request an export and new transactions appear later that same day, after I submitted the request, a filter that excludes everything from that day and earlier would skip those new transactions
As a result, I have come to depend on duplicate prevention instead. You can run the same import multiple times on the same file and see that it doesn’t create any duplicates, even for transactions created by a prior import.
Under the old system, I always overlapped my import by a week, and to make it easier to go back many months for new users, I just switched to a month input. My code still has to read all 5000 transactions (they are no longer sorted by date in the export), so there’s zero impact on performance.
If you ever encounter a duplicate, let me know. They are easy to find and delete: same date, same amount, and same order ID, but a different import date/time. I did exactly that while testing the duplicate-detection code.
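A minimal sketch of the kind of duplicate check described above (the key fields match the description in this thread; the variable names and sample data are made up, not the actual script):

```python
def dedupe_key(txn):
    # A transaction counts as a duplicate if date, amount, and order ID
    # all match an already-imported row; the import timestamp is
    # deliberately excluded from the key.
    return (txn["Order Date"], txn["Total Owed"], txn["Order ID"])

# Keys of transactions already present in the Transactions sheet.
existing = {("2024-05-01", "19.99", "111-0000000-0000001")}

incoming = [
    {"Order Date": "2024-05-01", "Total Owed": "19.99", "Order ID": "111-0000000-0000001"},
    {"Order Date": "2024-05-02", "Total Owed": "5.49", "Order ID": "111-0000000-0000002"},
]

new_rows = [t for t in incoming if dedupe_key(t) not in existing]
print(len(new_rows))  # prints 1: only the second transaction survives
```

Because the key is content-based, re-running the import on the same file is idempotent, which is why the lookback window can stay coarse.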
I dropped a new release to GitHub: daveinlosbarriles/TillerAmazonOrdersCSVImport. A fair number of changes. Details are all in the readme, but in summary:
- It was too messy editing configuration values as constants in the Apps Script code, in particular when dealing with multiple payment types. Now, when you first open the Import dialog, a new sheet called “AMZ Import” is created. Go to this sheet and enter one row for each credit card you use with Amazon, with the appropriate Account, Account ID, and Institution ID values for each.
- There were other constants in the code that I also moved to the AMZ Import tab, but you normally won’t ever need to touch those: things like the names of the fields we need from the Amazon CSV, the name of the Transactions tab, the Date column, the Amount column, etc.
- One handy option: you can duplicate just your Tiller Transactions tab, call it Test, and then on the AMZ Import tab set the transactions tab name to “Test”. Then you can play around and confirm for yourself that the Amazon transactions are imported to your liking. Just don’t forget to change it back.
- I also took most of the remaining interesting Amazon transaction data and stuck it in the Metadata column, just like the old tool did, so the number of items, price per item, and taxes are all there.
- Finally, after adding support for multiple payment types, I realized that the offsetting entry that avoids double-counting your Amazon transactions should be broken out by credit card, instead of applied to just one card.
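To illustrate the per-card configuration idea in the first bullet (the column names and sample values here are hypothetical; the real tool reads its configuration from the AMZ Import sheet, not from code):

```python
# Hypothetical rows as they might appear on the "AMZ Import" sheet:
# one row per credit card used with Amazon.
CARD_CONFIG = [
    {"Payment Type": "Visa - 1234", "Account": "Chase Amazon Rewards Card",
     "Account ID": "acct-1", "Institution": "Chase"},
    {"Payment Type": "Mastercard - 5678", "Account": "Citi Costco Card",
     "Account ID": "acct-2", "Institution": "Citi"},
]

# Index the rows by payment type for O(1) lookup during import.
LOOKUP = {row["Payment Type"]: row for row in CARD_CONFIG}

def account_for(payment_type):
    """Return the configured account details for a payment type,
    or None if that card has not been configured on the sheet."""
    return LOOKUP.get(payment_type)

print(account_for("Visa - 1234")["Account"])  # prints: Chase Amazon Rewards Card
```

Keeping the mapping in a sheet rather than in code means adding a new card is a one-row edit, with no risk of breaking the script.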
Go ahead and give it a try and let me know of any issues.
Randy,
Is Tillerhq any closer to updating the Amazon solution?
I really appreciate daveahlers’ effort to help, but I am not a programmer. I would even describe myself as technology-challenged!
Unless there is a simple solution, I might need to enter my Amazon purchases manually! This is too bad. I had used the Amazon Import in the past and really liked it. It was really a feature of Tillerhq that differentiated it from other similar programs.
Your response and comment would be appreciated.
Thanks!
Pierre
I had been using the Tiller Community version of the Amazon order importer, and it always frustrated me that the offsetting transactions seldom matched the transactions on my credit card. I was left with unexplained Amazon spending. There are a number of complexities, including returns and different shipment dates for items in the same order, and today I discovered a new reason: Amazon digital orders - books, movies, software - are NOT included in the Amazon Order History.csv file. There’s a separate file called Digital Content Orders.csv. Mine has 2000 purchases in it (yes, I read a lot). I’ll take a look at the difficulty of supporting this file type for import.
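If the digital file does get supported, one simple approach is to read both exports and process the union of their rows. This is a sketch under the assumption that both files are plain CSVs with a header row; the digital file’s columns are not documented here, so the code treats every file generically:

```python
import csv
from pathlib import Path

def read_rows(path):
    """Read one Amazon export CSV into a list of dicts."""
    with open(path, newline="", encoding="utf-8-sig") as f:
        return list(csv.DictReader(f))

def combined_orders(paths):
    """Concatenate rows from several Amazon export CSVs, tagging each
    row with its source file so downstream logic can branch on it."""
    rows = []
    for p in paths:
        for row in read_rows(p):
            row["Source File"] = Path(p).name
            rows.append(row)
    return rows
```

Tagging each row with its source file keeps the retail and digital pipelines distinguishable even after the lists are merged.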
We do not intend to fix the Tiller Community Solutions Amazon Import workflow.
We are discussing internally what the next steps are with better supporting (officially) an Amazon line item workflow, but any early test/access is likely months, not weeks, away.
I’ve started a new thread focusing on an updated import tool here: New Amazon import process
BTW, I did implement a date picker: daveinlosbarriles/TillerTools on GitHub (it also provides a quick search sidebar on 5 popular fields in the Tiller Transactions sheet tab).