502 Error for Large Files
I am trying to load a very large file using a library partially built by Jesse Wilson.
I am using the attached code, but I receive the following error after about 40 minutes. For smaller files (ones that take less than 30 minutes to upload), everything works fine:
502 Server Error: Bad Gateway for url: https://api.anaplan.com/2/0/workspaces/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/models/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/files/xxxxxxxxxxxx/chunks/3
This is after receiving these messages:
xxxxxxxxxxxxxxxxxxxxxxx.pem
<class 'str'>
Token validated
Token validated
None
Uploading chunk 1, Status: 204
Uploading chunk 1, Status: 204
Token validated
Token validated
None
Uploading chunk 2, Status: 204
Uploading chunk 2, Status: 204
Token validated
Token validated
None
Uploading chunk 3, Status: 204
Uploading chunk 3, Status: 204
Token validated
Token validated
I don't understand why it gives me a 502 error at the same spot every time. I can upload the file manually in Anaplan, and in the code I am re-authenticating and then verifying my authentication before each chunk, so it should work.
Because the connection times out after about 30 minutes, this makes using the API for large files almost impossible.
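For context, the per-chunk part of my code follows roughly this pattern (a simplified sketch, not the full attached code; anaplan_auth is the authentication module from the library, and upload_chunks and chunks are illustrative names):

import requests
import anaplan_auth  #authentication module from the library

def upload_chunks(url, authorization, chunks):
    #Simplified sketch: refresh and verify the token before each chunk PUT,
    #then send the chunk as application/octet-stream.
    for chunkNum, chunk_data in enumerate(chunks):
        authorization = anaplan_auth.refresh_token(authorization)  #re-authenticate
        print(anaplan_auth.verify_auth_flat_file_upload(authorization))  #test the authentication
        put_header = {
            "Authorization": authorization,
            "Content-Type": "application/octet-stream"
        }
        file_upload = requests.put(url + "/chunks/" + str(chunkNum), headers=put_header, data=chunk_data)
        print("Uploading chunk " + str(chunkNum + 1) + ", Status: " + str(file_upload.status_code))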
Comments
I was able to fix the main problem.
However, I am still receiving a 500 error with really large files.
Below is the fix I implemented:
import os
import requests
from requests.exceptions import HTTPError
import anaplan_auth  #authentication module from the library

#__BYTES__ (bytes per MB) and __base_url__ (Anaplan API base URL) are module-level
#constants defined elsewhere in the library.

def flat_file_upload(conn, fileId, chunkSize, file):
    '''
    :param conn: AnaplanConnection object which contains authorization string, workspace ID, and model ID
    :param fileId: ID of the file in the Anaplan model
    :param chunkSize: Desired size of the chunk, in megabytes
    :param file: Path to the local file to be uploaded to Anaplan
    '''
    #Setting local variables for connection details
    authorization = conn.authorization
    workspaceGuid = conn.workspaceGuid
    modelGuid = conn.modelGuid

    #Restrict users from entering a value for chunkSize greater than 50mb to prevent issues with the API server
    if chunkSize > 50:
        return "Chunk size must be 50mb or less."
    else:
        #Size of the local file in MB
        file_size = os.stat(file).st_size / __BYTES__
        file_data = ""
        post_header = {
            "Authorization": authorization,
            "Content-Type": "application/json"
        }
        put_header = {
            "Authorization": authorization,
            "Content-Type": "application/octet-stream"
        }
        file_metadata_start = {
            "id": fileId,
            "chunkCount": -1
        }
        file_metadata_complete = {
            "id": fileId,
            "chunkCount": file_size / chunkSize  #Number of chunks: file size in MB divided by chunk size in MB
        }
        url = __base_url__ + "/" + workspaceGuid + "/models/" + modelGuid + "/files/" + fileId

        try:
            start_upload_post = requests.post(url, headers=post_header, json=file_metadata_start)
            start_upload_post.raise_for_status()
        except HTTPError as e:
            raise HTTPError(e)

        #Confirm that the metadata update for the requested file was OK before proceeding with the file upload
        if start_upload_post.ok:
            complete = True #Variable to track whether the file has finished uploading
            with open(file, "rt") as f: #Loop through the file, reading a user-defined number of bytes until complete
                chunkNum = 0
                while True:
                    buf = f.readlines(__BYTES__ * chunkSize)
                    if not buf:
                        break
                    file_data = "" #Added to create a new, empty file_data buffer for each chunk
                    for item in buf:
                        file_data += item
                    try:
                        authorization_verify = anaplan_auth.verify_auth_flat_file_upload(authorization)
                        print(authorization_verify)
                        authorization = anaplan_auth.refresh_token(authorization) #Added re-authorization for larger files
                        authorization_verify = anaplan_auth.verify_auth_flat_file_upload(authorization)
                        print(authorization_verify)
                        put_header = {
                            "Authorization": authorization,
                            "Content-Type": "application/octet-stream"
                        }
                        file_upload = requests.put(url + "/chunks/" + str(chunkNum), headers=put_header, data=file_data)
                        file_upload.raise_for_status()
                        print("Uploading chunk " + str(chunkNum + 1) + ", Status: " + str(file_upload.status_code))
                    except HTTPError as e:
                        raise HTTPError(e)
                    if not file_upload.ok:
                        complete = False #If the upload fails, break the loop and prevent subsequent requests. Print the error to screen
                        print("Error " + str(file_upload.status_code) + '\n' + file_upload.text)
                        break
                    else:
                        chunkNum += 1
            if complete:
                complete_upload = requests.post(url + "/complete", headers=post_header, json=file_metadata_complete)
                if complete_upload.ok:
                    return "File upload complete, " + str(chunkNum) + " chunk(s) uploaded to the server."
                else:
                    return "There was an error with your request: " + str(complete_upload.status_code) + " " + complete_upload.text
        else:
            return "There was an error with your request: " + str(start_upload_post.status_code) + " " + start_upload_post.text
Hello,
Did you ever find a fix for this issue? I am facing the same issue, but there has been no response from anyone and I couldn't find any troubleshooting for this topic anywhere.
I found a solution and posted my updated code above, although I am still struggling with a 500 error for larger files.
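If anyone runs into the same 500s, one option may be to retry the chunk PUT a few times with a short wait, roughly like this (just a sketch; the helper name, attempt count, and wait time are made up, and I have not confirmed this fixes the issue):

import time
import requests

def put_chunk_with_retry(chunk_url, put_header, file_data, max_attempts=3, wait_seconds=30):
    #Retry the chunk PUT a few times when the server returns a transient 5xx error.
    for attempt in range(1, max_attempts + 1):
        file_upload = requests.put(chunk_url, headers=put_header, data=file_data)
        if file_upload.ok:
            return file_upload
        if file_upload.status_code >= 500 and attempt < max_attempts:
            print("Attempt " + str(attempt) + " failed with status " + str(file_upload.status_code) + ", retrying...")
            time.sleep(wait_seconds)
        else:
            file_upload.raise_for_status()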