Is it possible for ArcMap to treat data types (numbers as text) correctly when importing from a text file?
I am creating a text/csv file (using a python script). The file contains data to import into ArcMap. I have several fields that look like numbers but have leading zeros ("0012") etc. that I want to preserve. I thought I could force this on ArcMap by adding a dummy line at the beginning of the file that has obvious text in it (like an "X").
In the screenshot below, I have an X in several of the fields of the dummy first row of the file (including ACCTID in the first column), but ArcMap has made them into integers anyway. To clarify, I'm just trying to add the file into ArcMap directly using "Add Data".
Excel seems to treat them similarly. I do not want to open this in Excel and fiddle with the formatting of over 100 fields if I can avoid it, and it seems preposterous that I would have to. I found similar questions but nothing exactly like this. Some folks had the reverse issue (wanting numbers and getting text) and others were working with Excel files rather than plain text.
The code I'm using to write the CSV (if that matters) looks like this ...
import csv

with open(filehandle, 'wb') as csvfile:
    data_writer = csv.writer(csvfile, delimiter=",")
    if printHeaders:
        # header row written straight to the file handle
        csvfile.write(', '.join(these_headers) + '\n')
    if dummy:
        # dummy row of X's intended to force text field types in ArcMap
        csvfile.write(dummy + '\n')
further down I have ...
    if linecount < linelimit and printline:
        data_writer.writerow(theDict[acct])
        linecount += 1
I am actually running a search cursor through this table and using the results against an update cursor, keyed on ACCTID.
arcgis-desktop python arcmap csv
Some of the fields are numbers which you want to keep as numbers?
– BERA
9 hours ago
Yes some fields (about 10 of them out of 100) will need to stay numeric. I'm currently just leaving those blank in the dummy line.
– jbchurchill
9 hours ago
What are you going to do with the data once added correctly? Copy to a file geodatabase? Which ArcMap version do you have?
– BERA
8 hours ago
I'm using 10.6. I'm running a search cursor through the table and populating a dictionary to use to update the (parcels in this case) layer with transactional data.
– jbchurchill
8 hours ago
3 Answers
If you want those numbers to be treated as text, you need to quote them in the CSV. That means that you want to make sure your items are string types in Python already.
You don't say how you're writing the CSV from Python, but if you're just using the csv module, you'll want a quoting option different from the default (QUOTE_MINIMAL). QUOTE_NONNUMERIC is probably what you want here; see the csv module documentation for the details.
Here is the example from the doc on how to use this for writing to CSV:
import csv
with open('eggs.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ',
                            quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])
    spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
So your code could be modified as follows:
with open(filehandle, 'wb') as csvfile:
    data_writer = csv.writer(csvfile, delimiter=",", quotechar='"',
                             quoting=csv.QUOTE_NONNUMERIC)
    if printHeaders:
        csvfile.write(', '.join(these_headers) + '\n')
    if dummy:
        csvfile.write(dummy + '\n')
Since you say you are going to run a SearchCursor through this file, why not just iterate through the data after you read it into your script, instead of first writing it to disk? That sounds like it may be easier.
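For instance, a minimal sketch of that idea (the column layout, values, and the NUMERIC_COLS set below are hypothetical, not from the question):

import csv

# Sketch only: indexes of the columns that should stay numeric; every other
# column is coerced to str so QUOTE_NONNUMERIC quotes it and ArcMap reads it as text.
NUMERIC_COLS = {2, 3}

def as_typed_row(values):
    # keep numeric columns as numbers, force everything else to str
    return [v if i in NUMERIC_COLS else str(v)
            for i, v in enumerate(values)]

with open('output.csv', 'wb') as csvfile:  # 'wb' as in the question (Python 2 under ArcMap)
    writer = csv.writer(csvfile, quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
    writer.writerow(['ACCTID', 'JURSCODE', 'ACRES', 'YEAR'])
    writer.writerow(as_typed_row(['0012', 'ALLE', 1.5, 2019]))

The data row comes out as "0012","ALLE",1.5,2019, so the leading zero survives and the numeric columns stay unquoted.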
I am using the csv module. I'm reading in a file that has set columns and field widths and turning the data I'm interested in into CSV by just writing it into a file. I'll add my code. Maybe you can explain where I'd include that csv.QUOTE_NONNUMERIC constant. Would I just declare that somewhere before the write operation (sounds easy enough)?
– jbchurchill
8 hours ago
I tried making that declaration but the fields that appear numeric (ACCTID always starts w/ 12 for example) come out with no quotes.
– jbchurchill
8 hours ago
I added some examples above. After you do that, open your output file in Notepad or similar and see if the items you wanted quoted were in fact put in quotes. If not, you will need to look at how you are reading them in and whether they are being treated as strings the whole time. You may need to manually convert them in your script before writing.
– PhilippNagel
8 hours ago
I'm accepting this as the answer. I am going to have to rethink the overall method I'm using here but for now this does appear to allow me to get the data into a format that doesn't get converted to a number which works for unadulterated numbers and text.
– jbchurchill
3 hours ago
You could try to generate a schema.ini file alongside your CSV (in fact, after the first import ArcGIS probably creates one automatically; hopefully you'll just have to modify it and re-import).
Based on your data it could look like this:
[PDR_OUTPUT_1210.csv]
Format=Delimited(59)
ColNameHeader=True
Col1=ACCTID Text Width 254
Col2=JURSCODE Text Width 254
...
Colx=... Long
...
Col100=...
More information on the schema.ini file is available in Microsoft's documentation.
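If you go that route, something like the sketch below could write the schema.ini next to the CSV from the same Python script. The field names, the numeric set, and the path are placeholders, and Format=CSVDelimited is used here because the question's file is comma-delimited:

import os

def write_schema_ini(csv_path, headers, numeric_fields):
    # Declare Text for every column except a known set of numeric ones.
    folder, filename = os.path.split(csv_path)
    lines = ['[%s]' % filename, 'Format=CSVDelimited', 'ColNameHeader=True']
    for i, name in enumerate(headers, start=1):
        if name in numeric_fields:
            lines.append('Col%d=%s Long' % (i, name))
        else:
            lines.append('Col%d=%s Text Width 254' % (i, name))
    with open(os.path.join(folder, 'schema.ini'), 'w') as ini:
        ini.write('\n'.join(lines) + '\n')

write_schema_ini(r'C:\temp\PDR_OUTPUT_1210.csv',
                 ['ACCTID', 'JURSCODE', 'ACRES'],
                 numeric_fields={'ACRES'})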
I recommend the pandas module (which has been included with ArcGIS since approximately 10.4). I don't know how you want the dictionaries to be structured, but just about any structure is possible once the data is in pandas. See the pandas.DataFrame.to_dict documentation.
import pandas as pd

df = pd.read_csv(r"C:\Test\11111.txt", dtype={'field1': object})  # field1 is forced to be object (= string)

>>> df
  field1 field2  field3
0     01      A       9
1      2      B       9
2      3      C       9
3      4      D       9
>>> df.dtypes
field1    object
field2    object
field3     int64
dtype: object
>>> df.set_index('field1').T.to_dict('list')
{'2': ['B', 9L], '3': ['C', 9L], '01': ['A', 9L], '4': ['D', 9L]}
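And since the end goal is to push these values into the parcel layer, a dictionary like that could feed an arcpy.da.UpdateCursor, roughly like this (the layer name, field names, and CSV path below are placeholders, not from the question):

import arcpy
import pandas as pd

# Sketch only: key the frame on ACCTID, then update matching parcel rows.
df = pd.read_csv(r"C:\temp\transactions.csv", dtype={'ACCTID': object})
lookup = df.set_index('ACCTID').T.to_dict('list')  # {ACCTID: [value1, value2, ...]}
# note: each list follows the DataFrame's remaining column order

fields = ['ACCTID', 'FIELD_A', 'FIELD_B']  # key field first, then the fields to update
with arcpy.da.UpdateCursor('Parcels', fields) as cursor:
    for row in cursor:
        if row[0] in lookup:
            # overwrite the non-key fields with the transactional values
            row[1:] = lookup[row[0]][:len(fields) - 1]
            cursor.updateRow(row)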
it may take me some time but I will investigate this approach. I've used pandas but not specifically like this.
– jbchurchill
7 hours ago